2021-09-10 06:44:05

by Hannes Reinecke

Subject: [PATCHv3 00/12] nvme: In-band authentication support

Hi all,

recent updates to the NVMe spec have added definitions for in-band
authentication, and since it provides a real benefit, especially for
NVMe-TCP, here's an attempt to implement it.

The tricky bit is that the specification models itself on TLS 1.3,
but supports only the FFDHE groups, which the kernel of course does
not support. I've been able to come up with a patch for this, but as
this is my first attempt at fixing anything in the crypto area I would
invite people more familiar with these matters to have a look.

Also note that this is just for in-band authentication. Secure
concatenation (i.e. starting TLS with the negotiated parameters) is
not implemented; one would need to update the kernel TLS
implementation for this, which is beyond the scope of this patchset.

As usual, comments and reviews are welcome.

Changes to v2:
- Dropped non-standard algorithms
- Reworked base64 based on fs/crypto/fname.c
- Fixup crash with no keys

Changes to the original submission:
- Included reviews from Vladislav
- Included reviews from Sagi
- Implemented re-authentication support
- Fixed up key handling

Hannes Reinecke (12):
crypto: add crypto_has_shash()
crypto: add crypto_has_kpp()
crypto/ffdhe: Finite Field DH Ephemeral Parameters
lib/base64: RFC4648-compliant base64 encoding
nvme: add definitions for NVMe In-Band authentication
nvme-fabrics: decode 'authentication required' connect error
nvme: Implement In-Band authentication
nvme-auth: Diffie-Hellman key exchange support
nvmet: Parse fabrics commands on all queues
nvmet: Implement basic In-Band Authentication
nvmet-auth: Diffie-Hellman key exchange support
nvmet-auth: expire authentication sessions

crypto/Kconfig | 8 +
crypto/Makefile | 1 +
crypto/ffdhe_helper.c | 880 +++++++++++++++
crypto/kpp.c | 6 +
crypto/shash.c | 6 +
drivers/nvme/host/Kconfig | 13 +
drivers/nvme/host/Makefile | 1 +
drivers/nvme/host/auth.c | 1441 ++++++++++++++++++++++++
drivers/nvme/host/auth.h | 33 +
drivers/nvme/host/core.c | 79 +-
drivers/nvme/host/fabrics.c | 77 +-
drivers/nvme/host/fabrics.h | 6 +
drivers/nvme/host/nvme.h | 30 +
drivers/nvme/host/trace.c | 32 +
drivers/nvme/target/Kconfig | 12 +
drivers/nvme/target/Makefile | 1 +
drivers/nvme/target/admin-cmd.c | 4 +
drivers/nvme/target/auth.c | 442 ++++++++
drivers/nvme/target/configfs.c | 102 +-
drivers/nvme/target/core.c | 10 +
drivers/nvme/target/fabrics-cmd-auth.c | 506 +++++++++
drivers/nvme/target/fabrics-cmd.c | 30 +-
drivers/nvme/target/nvmet.h | 70 ++
include/crypto/ffdhe.h | 24 +
include/crypto/hash.h | 2 +
include/crypto/kpp.h | 2 +
include/linux/base64.h | 16 +
include/linux/nvme.h | 186 ++-
lib/Makefile | 2 +-
lib/base64.c | 100 ++
30 files changed, 4111 insertions(+), 11 deletions(-)
create mode 100644 crypto/ffdhe_helper.c
create mode 100644 drivers/nvme/host/auth.c
create mode 100644 drivers/nvme/host/auth.h
create mode 100644 drivers/nvme/target/auth.c
create mode 100644 drivers/nvme/target/fabrics-cmd-auth.c
create mode 100644 include/crypto/ffdhe.h
create mode 100644 include/linux/base64.h
create mode 100644 lib/base64.c

--
2.29.2


2021-09-10 06:44:05

by Hannes Reinecke

Subject: [PATCH 04/12] lib/base64: RFC4648-compliant base64 encoding

Add RFC4648-compliant base64 encoding and decoding routines, based on
the base64url encoding in fs/crypto/fname.c.

Signed-off-by: Hannes Reinecke <[email protected]>
---
include/linux/base64.h | 16 +++++++
lib/Makefile | 2 +-
lib/base64.c | 100 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 117 insertions(+), 1 deletion(-)
create mode 100644 include/linux/base64.h
create mode 100644 lib/base64.c

diff --git a/include/linux/base64.h b/include/linux/base64.h
new file mode 100644
index 000000000000..660d4cb1ef31
--- /dev/null
+++ b/include/linux/base64.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * base64 encoding, lifted from fs/crypto/fname.c.
+ */
+
+#ifndef _LINUX_BASE64_H
+#define _LINUX_BASE64_H
+
+#include <linux/types.h>
+
+#define BASE64_CHARS(nbytes) (4 * DIV_ROUND_UP(nbytes, 3))
+
+int base64_encode(const u8 *src, int len, char *dst);
+int base64_decode(const char *src, int len, u8 *dst);
+
+#endif /* _LINUX_BASE64_H */
diff --git a/lib/Makefile b/lib/Makefile
index 5efd1b435a37..ce964f013412 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -46,7 +46,7 @@ obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \
bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
list_sort.o uuid.o iov_iter.o clz_ctz.o \
bsearch.o find_bit.o llist.o memweight.o kfifo.o \
- percpu-refcount.o rhashtable.o \
+ percpu-refcount.o rhashtable.o base64.o \
once.o refcount.o usercopy.o errseq.o bucket_locks.o \
generic-radix-tree.o
obj-$(CONFIG_STRING_SELFTEST) += test_string.o
diff --git a/lib/base64.c b/lib/base64.c
new file mode 100644
index 000000000000..9f271665cbb1
--- /dev/null
+++ b/lib/base64.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * base64.c - RFC4648-compliant base64 encoding
+ *
+ * Copyright (c) 2020 Hannes Reinecke, SUSE
+ *
+ * Based on the base64url routines from fs/crypto/fname.c
+ * (which are using the URL-safe base64 encoding),
+ * modified to use the standard coding table from RFC4648 section 4.
+ */
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/export.h>
+#include <linux/string.h>
+#include <linux/base64.h>
+
+static const char base64_table[65] =
+ "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+
+/**
+ * base64_encode() - base64-encode some binary data
+ * @src: the binary data to encode
+ * @srclen: the length of @src in bytes
+ * @dst: (output) the base64-encoded string. Not NUL-terminated.
+ *
+ * Encodes data using base64 encoding, i.e. the "Base 64 Encoding" specified
+ * by RFC 4648, including the '='-padding.
+ *
+ * Return: the length of the resulting base64-encoded string in bytes.
+ */
+int base64_encode(const u8 *src, int srclen, char *dst)
+{
+ u32 ac = 0;
+ int bits = 0;
+ int i;
+ char *cp = dst;
+
+ for (i = 0; i < srclen; i++) {
+ ac = (ac << 8) | src[i];
+ bits += 8;
+ do {
+ bits -= 6;
+ *cp++ = base64_table[(ac >> bits) & 0x3f];
+ } while (bits >= 6);
+ }
+ if (bits) {
+ *cp++ = base64_table[(ac << (6 - bits)) & 0x3f];
+ bits -= 6;
+ }
+ while (bits < 0) {
+ *cp++ = '=';
+ bits += 2;
+ }
+ return cp - dst;
+}
+EXPORT_SYMBOL_GPL(base64_encode);
+
+/**
+ * base64_decode() - base64-decode a string
+ * @src: the string to decode. Doesn't need to be NUL-terminated.
+ * @srclen: the length of @src in bytes
+ * @dst: (output) the decoded binary data
+ *
+ * Decodes a string using base64 encoding, i.e. the "Base 64 Encoding"
+ * specified by RFC 4648, including the '='-padding.
+ *
+ * This implementation hasn't been optimized for performance.
+ *
+ * Return: the length of the resulting decoded binary data in bytes,
+ * or -1 if the string isn't a valid base64 string.
+ */
+int base64_decode(const char *src, int srclen, u8 *dst)
+{
+ u32 ac = 0;
+ int bits = 0;
+ int i;
+ u8 *bp = dst;
+
+ for (i = 0; i < srclen; i++) {
+ const char *p = strchr(base64_table, src[i]);
+
+ if (src[i] == '=') {
+ ac = (ac << 6);
+ continue;
+ }
+ if (p == NULL || src[i] == 0)
+ return -1;
+ ac = (ac << 6) | (p - base64_table);
+ bits += 6;
+ if (bits >= 8) {
+ bits -= 8;
+ *bp++ = (u8)(ac >> bits);
+ }
+ }
+ if (ac & ((1 << bits) - 1))
+ return -1;
+ return bp - dst;
+}
+EXPORT_SYMBOL_GPL(base64_decode);
--
2.29.2

2021-09-10 06:44:05

by Hannes Reinecke

Subject: [PATCH 06/12] nvme-fabrics: decode 'authentication required' connect error

The 'connect' command might fail with NVME_SC_AUTH_REQUIRED, so we
should decode this error, too.

Signed-off-by: Hannes Reinecke <[email protected]>
---
drivers/nvme/host/fabrics.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 668c6bb7a567..9a8eade7cd23 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -332,6 +332,10 @@ static void nvmf_log_connect_error(struct nvme_ctrl *ctrl,
dev_err(ctrl->device,
"Connect command failed: host path error\n");
break;
+ case NVME_SC_AUTH_REQUIRED:
+ dev_err(ctrl->device,
+ "Connect command failed: authentication required\n");
+ break;
default:
dev_err(ctrl->device,
"Connect command failed, error wo/DNR bit: %d\n",
--
2.29.2

2021-09-10 06:44:05

by Hannes Reinecke

Subject: [PATCH 09/12] nvmet: Parse fabrics commands on all queues

Fabrics commands might be sent to all queues, not just the admin one.

Signed-off-by: Hannes Reinecke <[email protected]>
---
drivers/nvme/target/core.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index b8425fa34300..6e253c3c5e0f 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -943,6 +943,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
if (unlikely(!req->sq->ctrl))
/* will return an error for any non-connect command: */
status = nvmet_parse_connect_cmd(req);
+ else if (nvme_is_fabrics(req->cmd))
+ status = nvmet_parse_fabrics_cmd(req);
else if (likely(req->sq->qid != 0))
status = nvmet_parse_io_cmd(req);
else
--
2.29.2

2021-09-10 06:44:05

by Hannes Reinecke

Subject: [PATCH 02/12] crypto: add crypto_has_kpp()

Add a helper function to determine if a given key-agreement protocol
primitive is supported.

Signed-off-by: Hannes Reinecke <[email protected]>
---
crypto/kpp.c | 6 ++++++
include/crypto/kpp.h | 2 ++
2 files changed, 8 insertions(+)

diff --git a/crypto/kpp.c b/crypto/kpp.c
index 313b2c699963..416e8a1a03ee 100644
--- a/crypto/kpp.c
+++ b/crypto/kpp.c
@@ -87,6 +87,12 @@ struct crypto_kpp *crypto_alloc_kpp(const char *alg_name, u32 type, u32 mask)
}
EXPORT_SYMBOL_GPL(crypto_alloc_kpp);

+int crypto_has_kpp(const char *alg_name, u32 type, u32 mask)
+{
+ return crypto_type_has_alg(alg_name, &crypto_kpp_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_has_kpp);
+
static void kpp_prepare_alg(struct kpp_alg *alg)
{
struct crypto_alg *base = &alg->base;
diff --git a/include/crypto/kpp.h b/include/crypto/kpp.h
index cccceadc164b..24d01e9877c1 100644
--- a/include/crypto/kpp.h
+++ b/include/crypto/kpp.h
@@ -104,6 +104,8 @@ struct kpp_alg {
*/
struct crypto_kpp *crypto_alloc_kpp(const char *alg_name, u32 type, u32 mask);

+int crypto_has_kpp(const char *alg_name, u32 type, u32 mask);
+
static inline struct crypto_tfm *crypto_kpp_tfm(struct crypto_kpp *tfm)
{
return &tfm->base;
--
2.29.2

2021-09-10 06:44:05

by Hannes Reinecke

Subject: [PATCH 03/12] crypto/ffdhe: Finite Field DH Ephemeral Parameters

Add helper functions to generate Finite Field DH Ephemeral Parameters
as specified in RFC 7919.

Signed-off-by: Hannes Reinecke <[email protected]>
---
crypto/Kconfig | 8 +
crypto/Makefile | 1 +
crypto/ffdhe_helper.c | 880 +++++++++++++++++++++++++++++++++++++++++
include/crypto/ffdhe.h | 24 ++
4 files changed, 913 insertions(+)
create mode 100644 crypto/ffdhe_helper.c
create mode 100644 include/crypto/ffdhe.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 536df4b6b825..2178649c8128 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -231,6 +231,14 @@ config CRYPTO_DH
help
Generic implementation of the Diffie-Hellman algorithm.

+config CRYPTO_FFDHE
+ tristate "Finite Field DH (RFC 7919) ephemeral parameters"
+ select CRYPTO_DH
+ select CRYPTO_KPP
+ select CRYPTO_RNG_DEFAULT
+ help
+ Generic implementation of the Finite Field DH ephemeral parameters (RFC 7919).
+
config CRYPTO_ECC
tristate

diff --git a/crypto/Makefile b/crypto/Makefile
index c633f15a0481..2b29a35f375a 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -176,6 +176,7 @@ obj-$(CONFIG_CRYPTO_OFB) += ofb.o
obj-$(CONFIG_CRYPTO_ECC) += ecc.o
obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o
obj-$(CONFIG_CRYPTO_CURVE25519) += curve25519-generic.o
+obj-$(CONFIG_CRYPTO_FFDHE) += ffdhe_helper.o

ecdh_generic-y += ecdh.o
ecdh_generic-y += ecdh_helper.o
diff --git a/crypto/ffdhe_helper.c b/crypto/ffdhe_helper.c
new file mode 100644
index 000000000000..d7018bc3a8ec
--- /dev/null
+++ b/crypto/ffdhe_helper.c
@@ -0,0 +1,880 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Finite Field DH Ephemeral Parameters
+ * Values are taken from RFC 7919 Appendix A
+ *
+ * Copyright (c) 2021, Hannes Reinecke, SUSE Software Products
+ */
+
+#include <linux/module.h>
+#include <crypto/internal/kpp.h>
+#include <crypto/kpp.h>
+#include <crypto/dh.h>
+#include <crypto/ffdhe.h>
+#include <linux/mpi.h>
+/*
+ * ffdhe2048 generator (g), modulus (p) and group size (q)
+ */
+const u8 ffdhe2048_g[] = { 0x02 };
+
+const u8 ffdhe2048_p[] = {
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xad,0xf8,0x54,0x58,0xa2,0xbb,0x4a,0x9a,
+ 0xaf,0xdc,0x56,0x20,0x27,0x3d,0x3c,0xf1,
+ 0xd8,0xb9,0xc5,0x83,0xce,0x2d,0x36,0x95,
+ 0xa9,0xe1,0x36,0x41,0x14,0x64,0x33,0xfb,
+ 0xcc,0x93,0x9d,0xce,0x24,0x9b,0x3e,0xf9,
+ 0x7d,0x2f,0xe3,0x63,0x63,0x0c,0x75,0xd8,
+ 0xf6,0x81,0xb2,0x02,0xae,0xc4,0x61,0x7a,
+ 0xd3,0xdf,0x1e,0xd5,0xd5,0xfd,0x65,0x61,
+ 0x24,0x33,0xf5,0x1f,0x5f,0x06,0x6e,0xd0,
+ 0x85,0x63,0x65,0x55,0x3d,0xed,0x1a,0xf3,
+ 0xb5,0x57,0x13,0x5e,0x7f,0x57,0xc9,0x35,
+ 0x98,0x4f,0x0c,0x70,0xe0,0xe6,0x8b,0x77,
+ 0xe2,0xa6,0x89,0xda,0xf3,0xef,0xe8,0x72,
+ 0x1d,0xf1,0x58,0xa1,0x36,0xad,0xe7,0x35,
+ 0x30,0xac,0xca,0x4f,0x48,0x3a,0x79,0x7a,
+ 0xbc,0x0a,0xb1,0x82,0xb3,0x24,0xfb,0x61,
+ 0xd1,0x08,0xa9,0x4b,0xb2,0xc8,0xe3,0xfb,
+ 0xb9,0x6a,0xda,0xb7,0x60,0xd7,0xf4,0x68,
+ 0x1d,0x4f,0x42,0xa3,0xde,0x39,0x4d,0xf4,
+ 0xae,0x56,0xed,0xe7,0x63,0x72,0xbb,0x19,
+ 0x0b,0x07,0xa7,0xc8,0xee,0x0a,0x6d,0x70,
+ 0x9e,0x02,0xfc,0xe1,0xcd,0xf7,0xe2,0xec,
+ 0xc0,0x34,0x04,0xcd,0x28,0x34,0x2f,0x61,
+ 0x91,0x72,0xfe,0x9c,0xe9,0x85,0x83,0xff,
+ 0x8e,0x4f,0x12,0x32,0xee,0xf2,0x81,0x83,
+ 0xc3,0xfe,0x3b,0x1b,0x4c,0x6f,0xad,0x73,
+ 0x3b,0xb5,0xfc,0xbc,0x2e,0xc2,0x20,0x05,
+ 0xc5,0x8e,0xf1,0x83,0x7d,0x16,0x83,0xb2,
+ 0xc6,0xf3,0x4a,0x26,0xc1,0xb2,0xef,0xfa,
+ 0x88,0x6b,0x42,0x38,0x61,0x28,0x5c,0x97,
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+const u8 ffdhe2048_q[] = {
+ 0x7f,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xd6,0xfc,0x2a,0x2c,0x51,0x5d,0xa5,0x4d,
+ 0x57,0xee,0x2b,0x10,0x13,0x9e,0x9e,0x78,
+ 0xec,0x5c,0xe2,0xc1,0xe7,0x16,0x9b,0x4a,
+ 0xd4,0xf0,0x9b,0x20,0x8a,0x32,0x19,0xfd,
+ 0xe6,0x49,0xce,0xe7,0x12,0x4d,0x9f,0x7c,
+ 0xbe,0x97,0xf1,0xb1,0xb1,0x86,0x3a,0xec,
+ 0x7b,0x40,0xd9,0x01,0x57,0x62,0x30,0xbd,
+ 0x69,0xef,0x8f,0x6a,0xea,0xfe,0xb2,0xb0,
+ 0x92,0x19,0xfa,0x8f,0xaf,0x83,0x37,0x68,
+ 0x42,0xb1,0xb2,0xaa,0x9e,0xf6,0x8d,0x79,
+ 0xda,0xab,0x89,0xaf,0x3f,0xab,0xe4,0x9a,
+ 0xcc,0x27,0x86,0x38,0x70,0x73,0x45,0xbb,
+ 0xf1,0x53,0x44,0xed,0x79,0xf7,0xf4,0x39,
+ 0x0e,0xf8,0xac,0x50,0x9b,0x56,0xf3,0x9a,
+ 0x98,0x56,0x65,0x27,0xa4,0x1d,0x3c,0xbd,
+ 0x5e,0x05,0x58,0xc1,0x59,0x92,0x7d,0xb0,
+ 0xe8,0x84,0x54,0xa5,0xd9,0x64,0x71,0xfd,
+ 0xdc,0xb5,0x6d,0x5b,0xb0,0x6b,0xfa,0x34,
+ 0x0e,0xa7,0xa1,0x51,0xef,0x1c,0xa6,0xfa,
+ 0x57,0x2b,0x76,0xf3,0xb1,0xb9,0x5d,0x8c,
+ 0x85,0x83,0xd3,0xe4,0x77,0x05,0x36,0xb8,
+ 0x4f,0x01,0x7e,0x70,0xe6,0xfb,0xf1,0x76,
+ 0x60,0x1a,0x02,0x66,0x94,0x1a,0x17,0xb0,
+ 0xc8,0xb9,0x7f,0x4e,0x74,0xc2,0xc1,0xff,
+ 0xc7,0x27,0x89,0x19,0x77,0x79,0x40,0xc1,
+ 0xe1,0xff,0x1d,0x8d,0xa6,0x37,0xd6,0xb9,
+ 0x9d,0xda,0xfe,0x5e,0x17,0x61,0x10,0x02,
+ 0xe2,0xc7,0x78,0xc1,0xbe,0x8b,0x41,0xd9,
+ 0x63,0x79,0xa5,0x13,0x60,0xd9,0x77,0xfd,
+ 0x44,0x35,0xa1,0x1c,0x30,0x94,0x2e,0x4b,
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+/*
+ * ffdhe3072 generator (g), modulus (p) and group size (q)
+ */
+
+const u8 ffdhe3072_g[] = { 0x02 };
+
+const u8 ffdhe3072_p[] = {
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xad,0xf8,0x54,0x58,0xa2,0xbb,0x4a,0x9a,
+ 0xaf,0xdc,0x56,0x20,0x27,0x3d,0x3c,0xf1,
+ 0xd8,0xb9,0xc5,0x83,0xce,0x2d,0x36,0x95,
+ 0xa9,0xe1,0x36,0x41,0x14,0x64,0x33,0xfb,
+ 0xcc,0x93,0x9d,0xce,0x24,0x9b,0x3e,0xf9,
+ 0x7d,0x2f,0xe3,0x63,0x63,0x0c,0x75,0xd8,
+ 0xf6,0x81,0xb2,0x02,0xae,0xc4,0x61,0x7a,
+ 0xd3,0xdf,0x1e,0xd5,0xd5,0xfd,0x65,0x61,
+ 0x24,0x33,0xf5,0x1f,0x5f,0x06,0x6e,0xd0,
+ 0x85,0x63,0x65,0x55,0x3d,0xed,0x1a,0xf3,
+ 0xb5,0x57,0x13,0x5e,0x7f,0x57,0xc9,0x35,
+ 0x98,0x4f,0x0c,0x70,0xe0,0xe6,0x8b,0x77,
+ 0xe2,0xa6,0x89,0xda,0xf3,0xef,0xe8,0x72,
+ 0x1d,0xf1,0x58,0xa1,0x36,0xad,0xe7,0x35,
+ 0x30,0xac,0xca,0x4f,0x48,0x3a,0x79,0x7a,
+ 0xbc,0x0a,0xb1,0x82,0xb3,0x24,0xfb,0x61,
+ 0xd1,0x08,0xa9,0x4b,0xb2,0xc8,0xe3,0xfb,
+ 0xb9,0x6a,0xda,0xb7,0x60,0xd7,0xf4,0x68,
+ 0x1d,0x4f,0x42,0xa3,0xde,0x39,0x4d,0xf4,
+ 0xae,0x56,0xed,0xe7,0x63,0x72,0xbb,0x19,
+ 0x0b,0x07,0xa7,0xc8,0xee,0x0a,0x6d,0x70,
+ 0x9e,0x02,0xfc,0xe1,0xcd,0xf7,0xe2,0xec,
+ 0xc0,0x34,0x04,0xcd,0x28,0x34,0x2f,0x61,
+ 0x91,0x72,0xfe,0x9c,0xe9,0x85,0x83,0xff,
+ 0x8e,0x4f,0x12,0x32,0xee,0xf2,0x81,0x83,
+ 0xc3,0xfe,0x3b,0x1b,0x4c,0x6f,0xad,0x73,
+ 0x3b,0xb5,0xfc,0xbc,0x2e,0xc2,0x20,0x05,
+ 0xc5,0x8e,0xf1,0x83,0x7d,0x16,0x83,0xb2,
+ 0xc6,0xf3,0x4a,0x26,0xc1,0xb2,0xef,0xfa,
+ 0x88,0x6b,0x42,0x38,0x61,0x1f,0xcf,0xdc,
+ 0xde,0x35,0x5b,0x3b,0x65,0x19,0x03,0x5b,
+ 0xbc,0x34,0xf4,0xde,0xf9,0x9c,0x02,0x38,
+ 0x61,0xb4,0x6f,0xc9,0xd6,0xe6,0xc9,0x07,
+ 0x7a,0xd9,0x1d,0x26,0x91,0xf7,0xf7,0xee,
+ 0x59,0x8c,0xb0,0xfa,0xc1,0x86,0xd9,0x1c,
+ 0xae,0xfe,0x13,0x09,0x85,0x13,0x92,0x70,
+ 0xb4,0x13,0x0c,0x93,0xbc,0x43,0x79,0x44,
+ 0xf4,0xfd,0x44,0x52,0xe2,0xd7,0x4d,0xd3,
+ 0x64,0xf2,0xe2,0x1e,0x71,0xf5,0x4b,0xff,
+ 0x5c,0xae,0x82,0xab,0x9c,0x9d,0xf6,0x9e,
+ 0xe8,0x6d,0x2b,0xc5,0x22,0x36,0x3a,0x0d,
+ 0xab,0xc5,0x21,0x97,0x9b,0x0d,0xea,0xda,
+ 0x1d,0xbf,0x9a,0x42,0xd5,0xc4,0x48,0x4e,
+ 0x0a,0xbc,0xd0,0x6b,0xfa,0x53,0xdd,0xef,
+ 0x3c,0x1b,0x20,0xee,0x3f,0xd5,0x9d,0x7c,
+ 0x25,0xe4,0x1d,0x2b,0x66,0xc6,0x2e,0x37,
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+const u8 ffdhe3072_q[] = {
+ 0x7f,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xd6,0xfc,0x2a,0x2c,0x51,0x5d,0xa5,0x4d,
+ 0x57,0xee,0x2b,0x10,0x13,0x9e,0x9e,0x78,
+ 0xec,0x5c,0xe2,0xc1,0xe7,0x16,0x9b,0x4a,
+ 0xd4,0xf0,0x9b,0x20,0x8a,0x32,0x19,0xfd,
+ 0xe6,0x49,0xce,0xe7,0x12,0x4d,0x9f,0x7c,
+ 0xbe,0x97,0xf1,0xb1,0xb1,0x86,0x3a,0xec,
+ 0x7b,0x40,0xd9,0x01,0x57,0x62,0x30,0xbd,
+ 0x69,0xef,0x8f,0x6a,0xea,0xfe,0xb2,0xb0,
+ 0x92,0x19,0xfa,0x8f,0xaf,0x83,0x37,0x68,
+ 0x42,0xb1,0xb2,0xaa,0x9e,0xf6,0x8d,0x79,
+ 0xda,0xab,0x89,0xaf,0x3f,0xab,0xe4,0x9a,
+ 0xcc,0x27,0x86,0x38,0x70,0x73,0x45,0xbb,
+ 0xf1,0x53,0x44,0xed,0x79,0xf7,0xf4,0x39,
+ 0x0e,0xf8,0xac,0x50,0x9b,0x56,0xf3,0x9a,
+ 0x98,0x56,0x65,0x27,0xa4,0x1d,0x3c,0xbd,
+ 0x5e,0x05,0x58,0xc1,0x59,0x92,0x7d,0xb0,
+ 0xe8,0x84,0x54,0xa5,0xd9,0x64,0x71,0xfd,
+ 0xdc,0xb5,0x6d,0x5b,0xb0,0x6b,0xfa,0x34,
+ 0x0e,0xa7,0xa1,0x51,0xef,0x1c,0xa6,0xfa,
+ 0x57,0x2b,0x76,0xf3,0xb1,0xb9,0x5d,0x8c,
+ 0x85,0x83,0xd3,0xe4,0x77,0x05,0x36,0xb8,
+ 0x4f,0x01,0x7e,0x70,0xe6,0xfb,0xf1,0x76,
+ 0x60,0x1a,0x02,0x66,0x94,0x1a,0x17,0xb0,
+ 0xc8,0xb9,0x7f,0x4e,0x74,0xc2,0xc1,0xff,
+ 0xc7,0x27,0x89,0x19,0x77,0x79,0x40,0xc1,
+ 0xe1,0xff,0x1d,0x8d,0xa6,0x37,0xd6,0xb9,
+ 0x9d,0xda,0xfe,0x5e,0x17,0x61,0x10,0x02,
+ 0xe2,0xc7,0x78,0xc1,0xbe,0x8b,0x41,0xd9,
+ 0x63,0x79,0xa5,0x13,0x60,0xd9,0x77,0xfd,
+ 0x44,0x35,0xa1,0x1c,0x30,0x8f,0xe7,0xee,
+ 0x6f,0x1a,0xad,0x9d,0xb2,0x8c,0x81,0xad,
+ 0xde,0x1a,0x7a,0x6f,0x7c,0xce,0x01,0x1c,
+ 0x30,0xda,0x37,0xe4,0xeb,0x73,0x64,0x83,
+ 0xbd,0x6c,0x8e,0x93,0x48,0xfb,0xfb,0xf7,
+ 0x2c,0xc6,0x58,0x7d,0x60,0xc3,0x6c,0x8e,
+ 0x57,0x7f,0x09,0x84,0xc2,0x89,0xc9,0x38,
+ 0x5a,0x09,0x86,0x49,0xde,0x21,0xbc,0xa2,
+ 0x7a,0x7e,0xa2,0x29,0x71,0x6b,0xa6,0xe9,
+ 0xb2,0x79,0x71,0x0f,0x38,0xfa,0xa5,0xff,
+ 0xae,0x57,0x41,0x55,0xce,0x4e,0xfb,0x4f,
+ 0x74,0x36,0x95,0xe2,0x91,0x1b,0x1d,0x06,
+ 0xd5,0xe2,0x90,0xcb,0xcd,0x86,0xf5,0x6d,
+ 0x0e,0xdf,0xcd,0x21,0x6a,0xe2,0x24,0x27,
+ 0x05,0x5e,0x68,0x35,0xfd,0x29,0xee,0xf7,
+ 0x9e,0x0d,0x90,0x77,0x1f,0xea,0xce,0xbe,
+ 0x12,0xf2,0x0e,0x95,0xb3,0x63,0x17,0x1b,
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+/*
+ * ffdhe4096 generator (g), modulus (p) and group size (q)
+ */
+
+const u8 ffdhe4096_g[] = { 0x02 };
+
+const u8 ffdhe4096_p[] = {
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xad,0xf8,0x54,0x58,0xa2,0xbb,0x4a,0x9a,
+ 0xaf,0xdc,0x56,0x20,0x27,0x3d,0x3c,0xf1,
+ 0xd8,0xb9,0xc5,0x83,0xce,0x2d,0x36,0x95,
+ 0xa9,0xe1,0x36,0x41,0x14,0x64,0x33,0xfb,
+ 0xcc,0x93,0x9d,0xce,0x24,0x9b,0x3e,0xf9,
+ 0x7d,0x2f,0xe3,0x63,0x63,0x0c,0x75,0xd8,
+ 0xf6,0x81,0xb2,0x02,0xae,0xc4,0x61,0x7a,
+ 0xd3,0xdf,0x1e,0xd5,0xd5,0xfd,0x65,0x61,
+ 0x24,0x33,0xf5,0x1f,0x5f,0x06,0x6e,0xd0,
+ 0x85,0x63,0x65,0x55,0x3d,0xed,0x1a,0xf3,
+ 0xb5,0x57,0x13,0x5e,0x7f,0x57,0xc9,0x35,
+ 0x98,0x4f,0x0c,0x70,0xe0,0xe6,0x8b,0x77,
+ 0xe2,0xa6,0x89,0xda,0xf3,0xef,0xe8,0x72,
+ 0x1d,0xf1,0x58,0xa1,0x36,0xad,0xe7,0x35,
+ 0x30,0xac,0xca,0x4f,0x48,0x3a,0x79,0x7a,
+ 0xbc,0x0a,0xb1,0x82,0xb3,0x24,0xfb,0x61,
+ 0xd1,0x08,0xa9,0x4b,0xb2,0xc8,0xe3,0xfb,
+ 0xb9,0x6a,0xda,0xb7,0x60,0xd7,0xf4,0x68,
+ 0x1d,0x4f,0x42,0xa3,0xde,0x39,0x4d,0xf4,
+ 0xae,0x56,0xed,0xe7,0x63,0x72,0xbb,0x19,
+ 0x0b,0x07,0xa7,0xc8,0xee,0x0a,0x6d,0x70,
+ 0x9e,0x02,0xfc,0xe1,0xcd,0xf7,0xe2,0xec,
+ 0xc0,0x34,0x04,0xcd,0x28,0x34,0x2f,0x61,
+ 0x91,0x72,0xfe,0x9c,0xe9,0x85,0x83,0xff,
+ 0x8e,0x4f,0x12,0x32,0xee,0xf2,0x81,0x83,
+ 0xc3,0xfe,0x3b,0x1b,0x4c,0x6f,0xad,0x73,
+ 0x3b,0xb5,0xfc,0xbc,0x2e,0xc2,0x20,0x05,
+ 0xc5,0x8e,0xf1,0x83,0x7d,0x16,0x83,0xb2,
+ 0xc6,0xf3,0x4a,0x26,0xc1,0xb2,0xef,0xfa,
+ 0x88,0x6b,0x42,0x38,0x61,0x1f,0xcf,0xdc,
+ 0xde,0x35,0x5b,0x3b,0x65,0x19,0x03,0x5b,
+ 0xbc,0x34,0xf4,0xde,0xf9,0x9c,0x02,0x38,
+ 0x61,0xb4,0x6f,0xc9,0xd6,0xe6,0xc9,0x07,
+ 0x7a,0xd9,0x1d,0x26,0x91,0xf7,0xf7,0xee,
+ 0x59,0x8c,0xb0,0xfa,0xc1,0x86,0xd9,0x1c,
+ 0xae,0xfe,0x13,0x09,0x85,0x13,0x92,0x70,
+ 0xb4,0x13,0x0c,0x93,0xbc,0x43,0x79,0x44,
+ 0xf4,0xfd,0x44,0x52,0xe2,0xd7,0x4d,0xd3,
+ 0x64,0xf2,0xe2,0x1e,0x71,0xf5,0x4b,0xff,
+ 0x5c,0xae,0x82,0xab,0x9c,0x9d,0xf6,0x9e,
+ 0xe8,0x6d,0x2b,0xc5,0x22,0x36,0x3a,0x0d,
+ 0xab,0xc5,0x21,0x97,0x9b,0x0d,0xea,0xda,
+ 0x1d,0xbf,0x9a,0x42,0xd5,0xc4,0x48,0x4e,
+ 0x0a,0xbc,0xd0,0x6b,0xfa,0x53,0xdd,0xef,
+ 0x3c,0x1b,0x20,0xee,0x3f,0xd5,0x9d,0x7c,
+ 0x25,0xe4,0x1d,0x2b,0x66,0x9e,0x1e,0xf1,
+ 0x6e,0x6f,0x52,0xc3,0x16,0x4d,0xf4,0xfb,
+ 0x79,0x30,0xe9,0xe4,0xe5,0x88,0x57,0xb6,
+ 0xac,0x7d,0x5f,0x42,0xd6,0x9f,0x6d,0x18,
+ 0x77,0x63,0xcf,0x1d,0x55,0x03,0x40,0x04,
+ 0x87,0xf5,0x5b,0xa5,0x7e,0x31,0xcc,0x7a,
+ 0x71,0x35,0xc8,0x86,0xef,0xb4,0x31,0x8a,
+ 0xed,0x6a,0x1e,0x01,0x2d,0x9e,0x68,0x32,
+ 0xa9,0x07,0x60,0x0a,0x91,0x81,0x30,0xc4,
+ 0x6d,0xc7,0x78,0xf9,0x71,0xad,0x00,0x38,
+ 0x09,0x29,0x99,0xa3,0x33,0xcb,0x8b,0x7a,
+ 0x1a,0x1d,0xb9,0x3d,0x71,0x40,0x00,0x3c,
+ 0x2a,0x4e,0xce,0xa9,0xf9,0x8d,0x0a,0xcc,
+ 0x0a,0x82,0x91,0xcd,0xce,0xc9,0x7d,0xcf,
+ 0x8e,0xc9,0xb5,0x5a,0x7f,0x88,0xa4,0x6b,
+ 0x4d,0xb5,0xa8,0x51,0xf4,0x41,0x82,0xe1,
+ 0xc6,0x8a,0x00,0x7e,0x5e,0x65,0x5f,0x6a,
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+const u8 ffdhe4096_q[] = {
+ 0x7f,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xd6,0xfc,0x2a,0x2c,0x51,0x5d,0xa5,0x4d,
+ 0x57,0xee,0x2b,0x10,0x13,0x9e,0x9e,0x78,
+ 0xec,0x5c,0xe2,0xc1,0xe7,0x16,0x9b,0x4a,
+ 0xd4,0xf0,0x9b,0x20,0x8a,0x32,0x19,0xfd,
+ 0xe6,0x49,0xce,0xe7,0x12,0x4d,0x9f,0x7c,
+ 0xbe,0x97,0xf1,0xb1,0xb1,0x86,0x3a,0xec,
+ 0x7b,0x40,0xd9,0x01,0x57,0x62,0x30,0xbd,
+ 0x69,0xef,0x8f,0x6a,0xea,0xfe,0xb2,0xb0,
+ 0x92,0x19,0xfa,0x8f,0xaf,0x83,0x37,0x68,
+ 0x42,0xb1,0xb2,0xaa,0x9e,0xf6,0x8d,0x79,
+ 0xda,0xab,0x89,0xaf,0x3f,0xab,0xe4,0x9a,
+ 0xcc,0x27,0x86,0x38,0x70,0x73,0x45,0xbb,
+ 0xf1,0x53,0x44,0xed,0x79,0xf7,0xf4,0x39,
+ 0x0e,0xf8,0xac,0x50,0x9b,0x56,0xf3,0x9a,
+ 0x98,0x56,0x65,0x27,0xa4,0x1d,0x3c,0xbd,
+ 0x5e,0x05,0x58,0xc1,0x59,0x92,0x7d,0xb0,
+ 0xe8,0x84,0x54,0xa5,0xd9,0x64,0x71,0xfd,
+ 0xdc,0xb5,0x6d,0x5b,0xb0,0x6b,0xfa,0x34,
+ 0x0e,0xa7,0xa1,0x51,0xef,0x1c,0xa6,0xfa,
+ 0x57,0x2b,0x76,0xf3,0xb1,0xb9,0x5d,0x8c,
+ 0x85,0x83,0xd3,0xe4,0x77,0x05,0x36,0xb8,
+ 0x4f,0x01,0x7e,0x70,0xe6,0xfb,0xf1,0x76,
+ 0x60,0x1a,0x02,0x66,0x94,0x1a,0x17,0xb0,
+ 0xc8,0xb9,0x7f,0x4e,0x74,0xc2,0xc1,0xff,
+ 0xc7,0x27,0x89,0x19,0x77,0x79,0x40,0xc1,
+ 0xe1,0xff,0x1d,0x8d,0xa6,0x37,0xd6,0xb9,
+ 0x9d,0xda,0xfe,0x5e,0x17,0x61,0x10,0x02,
+ 0xe2,0xc7,0x78,0xc1,0xbe,0x8b,0x41,0xd9,
+ 0x63,0x79,0xa5,0x13,0x60,0xd9,0x77,0xfd,
+ 0x44,0x35,0xa1,0x1c,0x30,0x8f,0xe7,0xee,
+ 0x6f,0x1a,0xad,0x9d,0xb2,0x8c,0x81,0xad,
+ 0xde,0x1a,0x7a,0x6f,0x7c,0xce,0x01,0x1c,
+ 0x30,0xda,0x37,0xe4,0xeb,0x73,0x64,0x83,
+ 0xbd,0x6c,0x8e,0x93,0x48,0xfb,0xfb,0xf7,
+ 0x2c,0xc6,0x58,0x7d,0x60,0xc3,0x6c,0x8e,
+ 0x57,0x7f,0x09,0x84,0xc2,0x89,0xc9,0x38,
+ 0x5a,0x09,0x86,0x49,0xde,0x21,0xbc,0xa2,
+ 0x7a,0x7e,0xa2,0x29,0x71,0x6b,0xa6,0xe9,
+ 0xb2,0x79,0x71,0x0f,0x38,0xfa,0xa5,0xff,
+ 0xae,0x57,0x41,0x55,0xce,0x4e,0xfb,0x4f,
+ 0x74,0x36,0x95,0xe2,0x91,0x1b,0x1d,0x06,
+ 0xd5,0xe2,0x90,0xcb,0xcd,0x86,0xf5,0x6d,
+ 0x0e,0xdf,0xcd,0x21,0x6a,0xe2,0x24,0x27,
+ 0x05,0x5e,0x68,0x35,0xfd,0x29,0xee,0xf7,
+ 0x9e,0x0d,0x90,0x77,0x1f,0xea,0xce,0xbe,
+ 0x12,0xf2,0x0e,0x95,0xb3,0x4f,0x0f,0x78,
+ 0xb7,0x37,0xa9,0x61,0x8b,0x26,0xfa,0x7d,
+ 0xbc,0x98,0x74,0xf2,0x72,0xc4,0x2b,0xdb,
+ 0x56,0x3e,0xaf,0xa1,0x6b,0x4f,0xb6,0x8c,
+ 0x3b,0xb1,0xe7,0x8e,0xaa,0x81,0xa0,0x02,
+ 0x43,0xfa,0xad,0xd2,0xbf,0x18,0xe6,0x3d,
+ 0x38,0x9a,0xe4,0x43,0x77,0xda,0x18,0xc5,
+ 0x76,0xb5,0x0f,0x00,0x96,0xcf,0x34,0x19,
+ 0x54,0x83,0xb0,0x05,0x48,0xc0,0x98,0x62,
+ 0x36,0xe3,0xbc,0x7c,0xb8,0xd6,0x80,0x1c,
+ 0x04,0x94,0xcc,0xd1,0x99,0xe5,0xc5,0xbd,
+ 0x0d,0x0e,0xdc,0x9e,0xb8,0xa0,0x00,0x1e,
+ 0x15,0x27,0x67,0x54,0xfc,0xc6,0x85,0x66,
+ 0x05,0x41,0x48,0xe6,0xe7,0x64,0xbe,0xe7,
+ 0xc7,0x64,0xda,0xad,0x3f,0xc4,0x52,0x35,
+ 0xa6,0xda,0xd4,0x28,0xfa,0x20,0xc1,0x70,
+ 0xe3,0x45,0x00,0x3f,0x2f,0x32,0xaf,0xb5,
+ 0x7f,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+/*
+ * ffdhe6144 generator (g), modulus (p) and group size (q)
+ */
+
+const u8 ffdhe6144_g[] = { 0x02 };
+
+const u8 ffdhe6144_p[] = {
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xad,0xf8,0x54,0x58,0xa2,0xbb,0x4a,0x9a,
+ 0xaf,0xdc,0x56,0x20,0x27,0x3d,0x3c,0xf1,
+ 0xd8,0xb9,0xc5,0x83,0xce,0x2d,0x36,0x95,
+ 0xa9,0xe1,0x36,0x41,0x14,0x64,0x33,0xfb,
+ 0xcc,0x93,0x9d,0xce,0x24,0x9b,0x3e,0xf9,
+ 0x7d,0x2f,0xe3,0x63,0x63,0x0c,0x75,0xd8,
+ 0xf6,0x81,0xb2,0x02,0xae,0xc4,0x61,0x7a,
+ 0xd3,0xdf,0x1e,0xd5,0xd5,0xfd,0x65,0x61,
+ 0x24,0x33,0xf5,0x1f,0x5f,0x06,0x6e,0xd0,
+ 0x85,0x63,0x65,0x55,0x3d,0xed,0x1a,0xf3,
+ 0xb5,0x57,0x13,0x5e,0x7f,0x57,0xc9,0x35,
+ 0x98,0x4f,0x0c,0x70,0xe0,0xe6,0x8b,0x77,
+ 0xe2,0xa6,0x89,0xda,0xf3,0xef,0xe8,0x72,
+ 0x1d,0xf1,0x58,0xa1,0x36,0xad,0xe7,0x35,
+ 0x30,0xac,0xca,0x4f,0x48,0x3a,0x79,0x7a,
+ 0xbc,0x0a,0xb1,0x82,0xb3,0x24,0xfb,0x61,
+ 0xd1,0x08,0xa9,0x4b,0xb2,0xc8,0xe3,0xfb,
+ 0xb9,0x6a,0xda,0xb7,0x60,0xd7,0xf4,0x68,
+ 0x1d,0x4f,0x42,0xa3,0xde,0x39,0x4d,0xf4,
+ 0xae,0x56,0xed,0xe7,0x63,0x72,0xbb,0x19,
+ 0x0b,0x07,0xa7,0xc8,0xee,0x0a,0x6d,0x70,
+ 0x9e,0x02,0xfc,0xe1,0xcd,0xf7,0xe2,0xec,
+ 0xc0,0x34,0x04,0xcd,0x28,0x34,0x2f,0x61,
+ 0x91,0x72,0xfe,0x9c,0xe9,0x85,0x83,0xff,
+ 0x8e,0x4f,0x12,0x32,0xee,0xf2,0x81,0x83,
+ 0xc3,0xfe,0x3b,0x1b,0x4c,0x6f,0xad,0x73,
+ 0x3b,0xb5,0xfc,0xbc,0x2e,0xc2,0x20,0x05,
+ 0xc5,0x8e,0xf1,0x83,0x7d,0x16,0x83,0xb2,
+ 0xc6,0xf3,0x4a,0x26,0xc1,0xb2,0xef,0xfa,
+ 0x88,0x6b,0x42,0x38,0x61,0x1f,0xcf,0xdc,
+ 0xde,0x35,0x5b,0x3b,0x65,0x19,0x03,0x5b,
+ 0xbc,0x34,0xf4,0xde,0xf9,0x9c,0x02,0x38,
+ 0x61,0xb4,0x6f,0xc9,0xd6,0xe6,0xc9,0x07,
+ 0x7a,0xd9,0x1d,0x26,0x91,0xf7,0xf7,0xee,
+ 0x59,0x8c,0xb0,0xfa,0xc1,0x86,0xd9,0x1c,
+ 0xae,0xfe,0x13,0x09,0x85,0x13,0x92,0x70,
+ 0xb4,0x13,0x0c,0x93,0xbc,0x43,0x79,0x44,
+ 0xf4,0xfd,0x44,0x52,0xe2,0xd7,0x4d,0xd3,
+ 0x64,0xf2,0xe2,0x1e,0x71,0xf5,0x4b,0xff,
+ 0x5c,0xae,0x82,0xab,0x9c,0x9d,0xf6,0x9e,
+ 0xe8,0x6d,0x2b,0xc5,0x22,0x36,0x3a,0x0d,
+ 0xab,0xc5,0x21,0x97,0x9b,0x0d,0xea,0xda,
+ 0x1d,0xbf,0x9a,0x42,0xd5,0xc4,0x48,0x4e,
+ 0x0a,0xbc,0xd0,0x6b,0xfa,0x53,0xdd,0xef,
+ 0x3c,0x1b,0x20,0xee,0x3f,0xd5,0x9d,0x7c,
+ 0x25,0xe4,0x1d,0x2b,0x66,0x9e,0x1e,0xf1,
+ 0x6e,0x6f,0x52,0xc3,0x16,0x4d,0xf4,0xfb,
+ 0x79,0x30,0xe9,0xe4,0xe5,0x88,0x57,0xb6,
+ 0xac,0x7d,0x5f,0x42,0xd6,0x9f,0x6d,0x18,
+ 0x77,0x63,0xcf,0x1d,0x55,0x03,0x40,0x04,
+ 0x87,0xf5,0x5b,0xa5,0x7e,0x31,0xcc,0x7a,
+ 0x71,0x35,0xc8,0x86,0xef,0xb4,0x31,0x8a,
+ 0xed,0x6a,0x1e,0x01,0x2d,0x9e,0x68,0x32,
+ 0xa9,0x07,0x60,0x0a,0x91,0x81,0x30,0xc4,
+ 0x6d,0xc7,0x78,0xf9,0x71,0xad,0x00,0x38,
+ 0x09,0x29,0x99,0xa3,0x33,0xcb,0x8b,0x7a,
+ 0x1a,0x1d,0xb9,0x3d,0x71,0x40,0x00,0x3c,
+ 0x2a,0x4e,0xce,0xa9,0xf9,0x8d,0x0a,0xcc,
+ 0x0a,0x82,0x91,0xcd,0xce,0xc9,0x7d,0xcf,
+ 0x8e,0xc9,0xb5,0x5a,0x7f,0x88,0xa4,0x6b,
+ 0x4d,0xb5,0xa8,0x51,0xf4,0x41,0x82,0xe1,
+ 0xc6,0x8a,0x00,0x7e,0x5e,0x0d,0xd9,0x02,
+ 0x0b,0xfd,0x64,0xb6,0x45,0x03,0x6c,0x7a,
+ 0x4e,0x67,0x7d,0x2c,0x38,0x53,0x2a,0x3a,
+ 0x23,0xba,0x44,0x42,0xca,0xf5,0x3e,0xa6,
+ 0x3b,0xb4,0x54,0x32,0x9b,0x76,0x24,0xc8,
+ 0x91,0x7b,0xdd,0x64,0xb1,0xc0,0xfd,0x4c,
+ 0xb3,0x8e,0x8c,0x33,0x4c,0x70,0x1c,0x3a,
+ 0xcd,0xad,0x06,0x57,0xfc,0xcf,0xec,0x71,
+ 0x9b,0x1f,0x5c,0x3e,0x4e,0x46,0x04,0x1f,
+ 0x38,0x81,0x47,0xfb,0x4c,0xfd,0xb4,0x77,
+ 0xa5,0x24,0x71,0xf7,0xa9,0xa9,0x69,0x10,
+ 0xb8,0x55,0x32,0x2e,0xdb,0x63,0x40,0xd8,
+ 0xa0,0x0e,0xf0,0x92,0x35,0x05,0x11,0xe3,
+ 0x0a,0xbe,0xc1,0xff,0xf9,0xe3,0xa2,0x6e,
+ 0x7f,0xb2,0x9f,0x8c,0x18,0x30,0x23,0xc3,
+ 0x58,0x7e,0x38,0xda,0x00,0x77,0xd9,0xb4,
+ 0x76,0x3e,0x4e,0x4b,0x94,0xb2,0xbb,0xc1,
+ 0x94,0xc6,0x65,0x1e,0x77,0xca,0xf9,0x92,
+ 0xee,0xaa,0xc0,0x23,0x2a,0x28,0x1b,0xf6,
+ 0xb3,0xa7,0x39,0xc1,0x22,0x61,0x16,0x82,
+ 0x0a,0xe8,0xdb,0x58,0x47,0xa6,0x7c,0xbe,
+ 0xf9,0xc9,0x09,0x1b,0x46,0x2d,0x53,0x8c,
+ 0xd7,0x2b,0x03,0x74,0x6a,0xe7,0x7f,0x5e,
+ 0x62,0x29,0x2c,0x31,0x15,0x62,0xa8,0x46,
+ 0x50,0x5d,0xc8,0x2d,0xb8,0x54,0x33,0x8a,
+ 0xe4,0x9f,0x52,0x35,0xc9,0x5b,0x91,0x17,
+ 0x8c,0xcf,0x2d,0xd5,0xca,0xce,0xf4,0x03,
+ 0xec,0x9d,0x18,0x10,0xc6,0x27,0x2b,0x04,
+ 0x5b,0x3b,0x71,0xf9,0xdc,0x6b,0x80,0xd6,
+ 0x3f,0xdd,0x4a,0x8e,0x9a,0xdb,0x1e,0x69,
+ 0x62,0xa6,0x95,0x26,0xd4,0x31,0x61,0xc1,
+ 0xa4,0x1d,0x57,0x0d,0x79,0x38,0xda,0xd4,
+ 0xa4,0x0e,0x32,0x9c,0xd0,0xe4,0x0e,0x65,
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+const u8 ffdhe6144_q[] = {
+ 0x7f,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xd6,0xfc,0x2a,0x2c,0x51,0x5d,0xa5,0x4d,
+ 0x57,0xee,0x2b,0x10,0x13,0x9e,0x9e,0x78,
+ 0xec,0x5c,0xe2,0xc1,0xe7,0x16,0x9b,0x4a,
+ 0xd4,0xf0,0x9b,0x20,0x8a,0x32,0x19,0xfd,
+ 0xe6,0x49,0xce,0xe7,0x12,0x4d,0x9f,0x7c,
+ 0xbe,0x97,0xf1,0xb1,0xb1,0x86,0x3a,0xec,
+ 0x7b,0x40,0xd9,0x01,0x57,0x62,0x30,0xbd,
+ 0x69,0xef,0x8f,0x6a,0xea,0xfe,0xb2,0xb0,
+ 0x92,0x19,0xfa,0x8f,0xaf,0x83,0x37,0x68,
+ 0x42,0xb1,0xb2,0xaa,0x9e,0xf6,0x8d,0x79,
+ 0xda,0xab,0x89,0xaf,0x3f,0xab,0xe4,0x9a,
+ 0xcc,0x27,0x86,0x38,0x70,0x73,0x45,0xbb,
+ 0xf1,0x53,0x44,0xed,0x79,0xf7,0xf4,0x39,
+ 0x0e,0xf8,0xac,0x50,0x9b,0x56,0xf3,0x9a,
+ 0x98,0x56,0x65,0x27,0xa4,0x1d,0x3c,0xbd,
+ 0x5e,0x05,0x58,0xc1,0x59,0x92,0x7d,0xb0,
+ 0xe8,0x84,0x54,0xa5,0xd9,0x64,0x71,0xfd,
+ 0xdc,0xb5,0x6d,0x5b,0xb0,0x6b,0xfa,0x34,
+ 0x0e,0xa7,0xa1,0x51,0xef,0x1c,0xa6,0xfa,
+ 0x57,0x2b,0x76,0xf3,0xb1,0xb9,0x5d,0x8c,
+ 0x85,0x83,0xd3,0xe4,0x77,0x05,0x36,0xb8,
+ 0x4f,0x01,0x7e,0x70,0xe6,0xfb,0xf1,0x76,
+ 0x60,0x1a,0x02,0x66,0x94,0x1a,0x17,0xb0,
+ 0xc8,0xb9,0x7f,0x4e,0x74,0xc2,0xc1,0xff,
+ 0xc7,0x27,0x89,0x19,0x77,0x79,0x40,0xc1,
+ 0xe1,0xff,0x1d,0x8d,0xa6,0x37,0xd6,0xb9,
+ 0x9d,0xda,0xfe,0x5e,0x17,0x61,0x10,0x02,
+ 0xe2,0xc7,0x78,0xc1,0xbe,0x8b,0x41,0xd9,
+ 0x63,0x79,0xa5,0x13,0x60,0xd9,0x77,0xfd,
+ 0x44,0x35,0xa1,0x1c,0x30,0x8f,0xe7,0xee,
+ 0x6f,0x1a,0xad,0x9d,0xb2,0x8c,0x81,0xad,
+ 0xde,0x1a,0x7a,0x6f,0x7c,0xce,0x01,0x1c,
+ 0x30,0xda,0x37,0xe4,0xeb,0x73,0x64,0x83,
+ 0xbd,0x6c,0x8e,0x93,0x48,0xfb,0xfb,0xf7,
+ 0x2c,0xc6,0x58,0x7d,0x60,0xc3,0x6c,0x8e,
+ 0x57,0x7f,0x09,0x84,0xc2,0x89,0xc9,0x38,
+ 0x5a,0x09,0x86,0x49,0xde,0x21,0xbc,0xa2,
+ 0x7a,0x7e,0xa2,0x29,0x71,0x6b,0xa6,0xe9,
+ 0xb2,0x79,0x71,0x0f,0x38,0xfa,0xa5,0xff,
+ 0xae,0x57,0x41,0x55,0xce,0x4e,0xfb,0x4f,
+ 0x74,0x36,0x95,0xe2,0x91,0x1b,0x1d,0x06,
+ 0xd5,0xe2,0x90,0xcb,0xcd,0x86,0xf5,0x6d,
+ 0x0e,0xdf,0xcd,0x21,0x6a,0xe2,0x24,0x27,
+ 0x05,0x5e,0x68,0x35,0xfd,0x29,0xee,0xf7,
+ 0x9e,0x0d,0x90,0x77,0x1f,0xea,0xce,0xbe,
+ 0x12,0xf2,0x0e,0x95,0xb3,0x4f,0x0f,0x78,
+ 0xb7,0x37,0xa9,0x61,0x8b,0x26,0xfa,0x7d,
+ 0xbc,0x98,0x74,0xf2,0x72,0xc4,0x2b,0xdb,
+ 0x56,0x3e,0xaf,0xa1,0x6b,0x4f,0xb6,0x8c,
+ 0x3b,0xb1,0xe7,0x8e,0xaa,0x81,0xa0,0x02,
+ 0x43,0xfa,0xad,0xd2,0xbf,0x18,0xe6,0x3d,
+ 0x38,0x9a,0xe4,0x43,0x77,0xda,0x18,0xc5,
+ 0x76,0xb5,0x0f,0x00,0x96,0xcf,0x34,0x19,
+ 0x54,0x83,0xb0,0x05,0x48,0xc0,0x98,0x62,
+ 0x36,0xe3,0xbc,0x7c,0xb8,0xd6,0x80,0x1c,
+ 0x04,0x94,0xcc,0xd1,0x99,0xe5,0xc5,0xbd,
+ 0x0d,0x0e,0xdc,0x9e,0xb8,0xa0,0x00,0x1e,
+ 0x15,0x27,0x67,0x54,0xfc,0xc6,0x85,0x66,
+ 0x05,0x41,0x48,0xe6,0xe7,0x64,0xbe,0xe7,
+ 0xc7,0x64,0xda,0xad,0x3f,0xc4,0x52,0x35,
+ 0xa6,0xda,0xd4,0x28,0xfa,0x20,0xc1,0x70,
+ 0xe3,0x45,0x00,0x3f,0x2f,0x06,0xec,0x81,
+ 0x05,0xfe,0xb2,0x5b,0x22,0x81,0xb6,0x3d,
+ 0x27,0x33,0xbe,0x96,0x1c,0x29,0x95,0x1d,
+ 0x11,0xdd,0x22,0x21,0x65,0x7a,0x9f,0x53,
+ 0x1d,0xda,0x2a,0x19,0x4d,0xbb,0x12,0x64,
+ 0x48,0xbd,0xee,0xb2,0x58,0xe0,0x7e,0xa6,
+ 0x59,0xc7,0x46,0x19,0xa6,0x38,0x0e,0x1d,
+ 0x66,0xd6,0x83,0x2b,0xfe,0x67,0xf6,0x38,
+ 0xcd,0x8f,0xae,0x1f,0x27,0x23,0x02,0x0f,
+ 0x9c,0x40,0xa3,0xfd,0xa6,0x7e,0xda,0x3b,
+ 0xd2,0x92,0x38,0xfb,0xd4,0xd4,0xb4,0x88,
+ 0x5c,0x2a,0x99,0x17,0x6d,0xb1,0xa0,0x6c,
+ 0x50,0x07,0x78,0x49,0x1a,0x82,0x88,0xf1,
+ 0x85,0x5f,0x60,0xff,0xfc,0xf1,0xd1,0x37,
+ 0x3f,0xd9,0x4f,0xc6,0x0c,0x18,0x11,0xe1,
+ 0xac,0x3f,0x1c,0x6d,0x00,0x3b,0xec,0xda,
+ 0x3b,0x1f,0x27,0x25,0xca,0x59,0x5d,0xe0,
+ 0xca,0x63,0x32,0x8f,0x3b,0xe5,0x7c,0xc9,
+ 0x77,0x55,0x60,0x11,0x95,0x14,0x0d,0xfb,
+ 0x59,0xd3,0x9c,0xe0,0x91,0x30,0x8b,0x41,
+ 0x05,0x74,0x6d,0xac,0x23,0xd3,0x3e,0x5f,
+ 0x7c,0xe4,0x84,0x8d,0xa3,0x16,0xa9,0xc6,
+ 0x6b,0x95,0x81,0xba,0x35,0x73,0xbf,0xaf,
+ 0x31,0x14,0x96,0x18,0x8a,0xb1,0x54,0x23,
+ 0x28,0x2e,0xe4,0x16,0xdc,0x2a,0x19,0xc5,
+ 0x72,0x4f,0xa9,0x1a,0xe4,0xad,0xc8,0x8b,
+ 0xc6,0x67,0x96,0xea,0xe5,0x67,0x7a,0x01,
+ 0xf6,0x4e,0x8c,0x08,0x63,0x13,0x95,0x82,
+ 0x2d,0x9d,0xb8,0xfc,0xee,0x35,0xc0,0x6b,
+ 0x1f,0xee,0xa5,0x47,0x4d,0x6d,0x8f,0x34,
+ 0xb1,0x53,0x4a,0x93,0x6a,0x18,0xb0,0xe0,
+ 0xd2,0x0e,0xab,0x86,0xbc,0x9c,0x6d,0x6a,
+ 0x52,0x07,0x19,0x4e,0x68,0x72,0x07,0x32,
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+/*
+ * ffdhe8192 generator (g), modulus (p) and group size (q)
+ */
+
+const u8 ffdhe8192_g[] = { 0x02 };
+
+const u8 ffdhe8192_p[] = {
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xad,0xf8,0x54,0x58,0xa2,0xbb,0x4a,0x9a,
+ 0xaf,0xdc,0x56,0x20,0x27,0x3d,0x3c,0xf1,
+ 0xd8,0xb9,0xc5,0x83,0xce,0x2d,0x36,0x95,
+ 0xa9,0xe1,0x36,0x41,0x14,0x64,0x33,0xfb,
+ 0xcc,0x93,0x9d,0xce,0x24,0x9b,0x3e,0xf9,
+ 0x7d,0x2f,0xe3,0x63,0x63,0x0c,0x75,0xd8,
+ 0xf6,0x81,0xb2,0x02,0xae,0xc4,0x61,0x7a,
+ 0xd3,0xdf,0x1e,0xd5,0xd5,0xfd,0x65,0x61,
+ 0x24,0x33,0xf5,0x1f,0x5f,0x06,0x6e,0xd0,
+ 0x85,0x63,0x65,0x55,0x3d,0xed,0x1a,0xf3,
+ 0xb5,0x57,0x13,0x5e,0x7f,0x57,0xc9,0x35,
+ 0x98,0x4f,0x0c,0x70,0xe0,0xe6,0x8b,0x77,
+ 0xe2,0xa6,0x89,0xda,0xf3,0xef,0xe8,0x72,
+ 0x1d,0xf1,0x58,0xa1,0x36,0xad,0xe7,0x35,
+ 0x30,0xac,0xca,0x4f,0x48,0x3a,0x79,0x7a,
+ 0xbc,0x0a,0xb1,0x82,0xb3,0x24,0xfb,0x61,
+ 0xd1,0x08,0xa9,0x4b,0xb2,0xc8,0xe3,0xfb,
+ 0xb9,0x6a,0xda,0xb7,0x60,0xd7,0xf4,0x68,
+ 0x1d,0x4f,0x42,0xa3,0xde,0x39,0x4d,0xf4,
+ 0xae,0x56,0xed,0xe7,0x63,0x72,0xbb,0x19,
+ 0x0b,0x07,0xa7,0xc8,0xee,0x0a,0x6d,0x70,
+ 0x9e,0x02,0xfc,0xe1,0xcd,0xf7,0xe2,0xec,
+ 0xc0,0x34,0x04,0xcd,0x28,0x34,0x2f,0x61,
+ 0x91,0x72,0xfe,0x9c,0xe9,0x85,0x83,0xff,
+ 0x8e,0x4f,0x12,0x32,0xee,0xf2,0x81,0x83,
+ 0xc3,0xfe,0x3b,0x1b,0x4c,0x6f,0xad,0x73,
+ 0x3b,0xb5,0xfc,0xbc,0x2e,0xc2,0x20,0x05,
+ 0xc5,0x8e,0xf1,0x83,0x7d,0x16,0x83,0xb2,
+ 0xc6,0xf3,0x4a,0x26,0xc1,0xb2,0xef,0xfa,
+ 0x88,0x6b,0x42,0x38,0x61,0x1f,0xcf,0xdc,
+ 0xde,0x35,0x5b,0x3b,0x65,0x19,0x03,0x5b,
+ 0xbc,0x34,0xf4,0xde,0xf9,0x9c,0x02,0x38,
+ 0x61,0xb4,0x6f,0xc9,0xd6,0xe6,0xc9,0x07,
+ 0x7a,0xd9,0x1d,0x26,0x91,0xf7,0xf7,0xee,
+ 0x59,0x8c,0xb0,0xfa,0xc1,0x86,0xd9,0x1c,
+ 0xae,0xfe,0x13,0x09,0x85,0x13,0x92,0x70,
+ 0xb4,0x13,0x0c,0x93,0xbc,0x43,0x79,0x44,
+ 0xf4,0xfd,0x44,0x52,0xe2,0xd7,0x4d,0xd3,
+ 0x64,0xf2,0xe2,0x1e,0x71,0xf5,0x4b,0xff,
+ 0x5c,0xae,0x82,0xab,0x9c,0x9d,0xf6,0x9e,
+ 0xe8,0x6d,0x2b,0xc5,0x22,0x36,0x3a,0x0d,
+ 0xab,0xc5,0x21,0x97,0x9b,0x0d,0xea,0xda,
+ 0x1d,0xbf,0x9a,0x42,0xd5,0xc4,0x48,0x4e,
+ 0x0a,0xbc,0xd0,0x6b,0xfa,0x53,0xdd,0xef,
+ 0x3c,0x1b,0x20,0xee,0x3f,0xd5,0x9d,0x7c,
+ 0x25,0xe4,0x1d,0x2b,0x66,0x9e,0x1e,0xf1,
+ 0x6e,0x6f,0x52,0xc3,0x16,0x4d,0xf4,0xfb,
+ 0x79,0x30,0xe9,0xe4,0xe5,0x88,0x57,0xb6,
+ 0xac,0x7d,0x5f,0x42,0xd6,0x9f,0x6d,0x18,
+ 0x77,0x63,0xcf,0x1d,0x55,0x03,0x40,0x04,
+ 0x87,0xf5,0x5b,0xa5,0x7e,0x31,0xcc,0x7a,
+ 0x71,0x35,0xc8,0x86,0xef,0xb4,0x31,0x8a,
+ 0xed,0x6a,0x1e,0x01,0x2d,0x9e,0x68,0x32,
+ 0xa9,0x07,0x60,0x0a,0x91,0x81,0x30,0xc4,
+ 0x6d,0xc7,0x78,0xf9,0x71,0xad,0x00,0x38,
+ 0x09,0x29,0x99,0xa3,0x33,0xcb,0x8b,0x7a,
+ 0x1a,0x1d,0xb9,0x3d,0x71,0x40,0x00,0x3c,
+ 0x2a,0x4e,0xce,0xa9,0xf9,0x8d,0x0a,0xcc,
+ 0x0a,0x82,0x91,0xcd,0xce,0xc9,0x7d,0xcf,
+ 0x8e,0xc9,0xb5,0x5a,0x7f,0x88,0xa4,0x6b,
+ 0x4d,0xb5,0xa8,0x51,0xf4,0x41,0x82,0xe1,
+ 0xc6,0x8a,0x00,0x7e,0x5e,0x0d,0xd9,0x02,
+ 0x0b,0xfd,0x64,0xb6,0x45,0x03,0x6c,0x7a,
+ 0x4e,0x67,0x7d,0x2c,0x38,0x53,0x2a,0x3a,
+ 0x23,0xba,0x44,0x42,0xca,0xf5,0x3e,0xa6,
+ 0x3b,0xb4,0x54,0x32,0x9b,0x76,0x24,0xc8,
+ 0x91,0x7b,0xdd,0x64,0xb1,0xc0,0xfd,0x4c,
+ 0xb3,0x8e,0x8c,0x33,0x4c,0x70,0x1c,0x3a,
+ 0xcd,0xad,0x06,0x57,0xfc,0xcf,0xec,0x71,
+ 0x9b,0x1f,0x5c,0x3e,0x4e,0x46,0x04,0x1f,
+ 0x38,0x81,0x47,0xfb,0x4c,0xfd,0xb4,0x77,
+ 0xa5,0x24,0x71,0xf7,0xa9,0xa9,0x69,0x10,
+ 0xb8,0x55,0x32,0x2e,0xdb,0x63,0x40,0xd8,
+ 0xa0,0x0e,0xf0,0x92,0x35,0x05,0x11,0xe3,
+ 0x0a,0xbe,0xc1,0xff,0xf9,0xe3,0xa2,0x6e,
+ 0x7f,0xb2,0x9f,0x8c,0x18,0x30,0x23,0xc3,
+ 0x58,0x7e,0x38,0xda,0x00,0x77,0xd9,0xb4,
+ 0x76,0x3e,0x4e,0x4b,0x94,0xb2,0xbb,0xc1,
+ 0x94,0xc6,0x65,0x1e,0x77,0xca,0xf9,0x92,
+ 0xee,0xaa,0xc0,0x23,0x2a,0x28,0x1b,0xf6,
+ 0xb3,0xa7,0x39,0xc1,0x22,0x61,0x16,0x82,
+ 0x0a,0xe8,0xdb,0x58,0x47,0xa6,0x7c,0xbe,
+ 0xf9,0xc9,0x09,0x1b,0x46,0x2d,0x53,0x8c,
+ 0xd7,0x2b,0x03,0x74,0x6a,0xe7,0x7f,0x5e,
+ 0x62,0x29,0x2c,0x31,0x15,0x62,0xa8,0x46,
+ 0x50,0x5d,0xc8,0x2d,0xb8,0x54,0x33,0x8a,
+ 0xe4,0x9f,0x52,0x35,0xc9,0x5b,0x91,0x17,
+ 0x8c,0xcf,0x2d,0xd5,0xca,0xce,0xf4,0x03,
+ 0xec,0x9d,0x18,0x10,0xc6,0x27,0x2b,0x04,
+ 0x5b,0x3b,0x71,0xf9,0xdc,0x6b,0x80,0xd6,
+ 0x3f,0xdd,0x4a,0x8e,0x9a,0xdb,0x1e,0x69,
+ 0x62,0xa6,0x95,0x26,0xd4,0x31,0x61,0xc1,
+ 0xa4,0x1d,0x57,0x0d,0x79,0x38,0xda,0xd4,
+ 0xa4,0x0e,0x32,0x9c,0xcf,0xf4,0x6a,0xaa,
+ 0x36,0xad,0x00,0x4c,0xf6,0x00,0xc8,0x38,
+ 0x1e,0x42,0x5a,0x31,0xd9,0x51,0xae,0x64,
+ 0xfd,0xb2,0x3f,0xce,0xc9,0x50,0x9d,0x43,
+ 0x68,0x7f,0xeb,0x69,0xed,0xd1,0xcc,0x5e,
+ 0x0b,0x8c,0xc3,0xbd,0xf6,0x4b,0x10,0xef,
+ 0x86,0xb6,0x31,0x42,0xa3,0xab,0x88,0x29,
+ 0x55,0x5b,0x2f,0x74,0x7c,0x93,0x26,0x65,
+ 0xcb,0x2c,0x0f,0x1c,0xc0,0x1b,0xd7,0x02,
+ 0x29,0x38,0x88,0x39,0xd2,0xaf,0x05,0xe4,
+ 0x54,0x50,0x4a,0xc7,0x8b,0x75,0x82,0x82,
+ 0x28,0x46,0xc0,0xba,0x35,0xc3,0x5f,0x5c,
+ 0x59,0x16,0x0c,0xc0,0x46,0xfd,0x82,0x51,
+ 0x54,0x1f,0xc6,0x8c,0x9c,0x86,0xb0,0x22,
+ 0xbb,0x70,0x99,0x87,0x6a,0x46,0x0e,0x74,
+ 0x51,0xa8,0xa9,0x31,0x09,0x70,0x3f,0xee,
+ 0x1c,0x21,0x7e,0x6c,0x38,0x26,0xe5,0x2c,
+ 0x51,0xaa,0x69,0x1e,0x0e,0x42,0x3c,0xfc,
+ 0x99,0xe9,0xe3,0x16,0x50,0xc1,0x21,0x7b,
+ 0x62,0x48,0x16,0xcd,0xad,0x9a,0x95,0xf9,
+ 0xd5,0xb8,0x01,0x94,0x88,0xd9,0xc0,0xa0,
+ 0xa1,0xfe,0x30,0x75,0xa5,0x77,0xe2,0x31,
+ 0x83,0xf8,0x1d,0x4a,0x3f,0x2f,0xa4,0x57,
+ 0x1e,0xfc,0x8c,0xe0,0xba,0x8a,0x4f,0xe8,
+ 0xb6,0x85,0x5d,0xfe,0x72,0xb0,0xa6,0x6e,
+ 0xde,0xd2,0xfb,0xab,0xfb,0xe5,0x8a,0x30,
+ 0xfa,0xfa,0xbe,0x1c,0x5d,0x71,0xa8,0x7e,
+ 0x2f,0x74,0x1e,0xf8,0xc1,0xfe,0x86,0xfe,
+ 0xa6,0xbb,0xfd,0xe5,0x30,0x67,0x7f,0x0d,
+ 0x97,0xd1,0x1d,0x49,0xf7,0xa8,0x44,0x3d,
+ 0x08,0x22,0xe5,0x06,0xa9,0xf4,0x61,0x4e,
+ 0x01,0x1e,0x2a,0x94,0x83,0x8f,0xf8,0x8c,
+ 0xd6,0x8c,0x8b,0xb7,0xc5,0xc6,0x42,0x4c,
+ 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+const u8 ffdhe8192_q[] = {
+ 0x7f,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+ 0xd6,0xfc,0x2a,0x2c,0x51,0x5d,0xa5,0x4d,
+ 0x57,0xee,0x2b,0x10,0x13,0x9e,0x9e,0x78,
+ 0xec,0x5c,0xe2,0xc1,0xe7,0x16,0x9b,0x4a,
+ 0xd4,0xf0,0x9b,0x20,0x8a,0x32,0x19,0xfd,
+ 0xe6,0x49,0xce,0xe7,0x12,0x4d,0x9f,0x7c,
+ 0xbe,0x97,0xf1,0xb1,0xb1,0x86,0x3a,0xec,
+ 0x7b,0x40,0xd9,0x01,0x57,0x62,0x30,0xbd,
+ 0x69,0xef,0x8f,0x6a,0xea,0xfe,0xb2,0xb0,
+ 0x92,0x19,0xfa,0x8f,0xaf,0x83,0x37,0x68,
+ 0x42,0xb1,0xb2,0xaa,0x9e,0xf6,0x8d,0x79,
+ 0xda,0xab,0x89,0xaf,0x3f,0xab,0xe4,0x9a,
+ 0xcc,0x27,0x86,0x38,0x70,0x73,0x45,0xbb,
+ 0xf1,0x53,0x44,0xed,0x79,0xf7,0xf4,0x39,
+ 0x0e,0xf8,0xac,0x50,0x9b,0x56,0xf3,0x9a,
+ 0x98,0x56,0x65,0x27,0xa4,0x1d,0x3c,0xbd,
+ 0x5e,0x05,0x58,0xc1,0x59,0x92,0x7d,0xb0,
+ 0xe8,0x84,0x54,0xa5,0xd9,0x64,0x71,0xfd,
+ 0xdc,0xb5,0x6d,0x5b,0xb0,0x6b,0xfa,0x34,
+ 0x0e,0xa7,0xa1,0x51,0xef,0x1c,0xa6,0xfa,
+ 0x57,0x2b,0x76,0xf3,0xb1,0xb9,0x5d,0x8c,
+ 0x85,0x83,0xd3,0xe4,0x77,0x05,0x36,0xb8,
+ 0x4f,0x01,0x7e,0x70,0xe6,0xfb,0xf1,0x76,
+ 0x60,0x1a,0x02,0x66,0x94,0x1a,0x17,0xb0,
+ 0xc8,0xb9,0x7f,0x4e,0x74,0xc2,0xc1,0xff,
+ 0xc7,0x27,0x89,0x19,0x77,0x79,0x40,0xc1,
+ 0xe1,0xff,0x1d,0x8d,0xa6,0x37,0xd6,0xb9,
+ 0x9d,0xda,0xfe,0x5e,0x17,0x61,0x10,0x02,
+ 0xe2,0xc7,0x78,0xc1,0xbe,0x8b,0x41,0xd9,
+ 0x63,0x79,0xa5,0x13,0x60,0xd9,0x77,0xfd,
+ 0x44,0x35,0xa1,0x1c,0x30,0x8f,0xe7,0xee,
+ 0x6f,0x1a,0xad,0x9d,0xb2,0x8c,0x81,0xad,
+ 0xde,0x1a,0x7a,0x6f,0x7c,0xce,0x01,0x1c,
+ 0x30,0xda,0x37,0xe4,0xeb,0x73,0x64,0x83,
+ 0xbd,0x6c,0x8e,0x93,0x48,0xfb,0xfb,0xf7,
+ 0x2c,0xc6,0x58,0x7d,0x60,0xc3,0x6c,0x8e,
+ 0x57,0x7f,0x09,0x84,0xc2,0x89,0xc9,0x38,
+ 0x5a,0x09,0x86,0x49,0xde,0x21,0xbc,0xa2,
+ 0x7a,0x7e,0xa2,0x29,0x71,0x6b,0xa6,0xe9,
+ 0xb2,0x79,0x71,0x0f,0x38,0xfa,0xa5,0xff,
+ 0xae,0x57,0x41,0x55,0xce,0x4e,0xfb,0x4f,
+ 0x74,0x36,0x95,0xe2,0x91,0x1b,0x1d,0x06,
+ 0xd5,0xe2,0x90,0xcb,0xcd,0x86,0xf5,0x6d,
+ 0x0e,0xdf,0xcd,0x21,0x6a,0xe2,0x24,0x27,
+ 0x05,0x5e,0x68,0x35,0xfd,0x29,0xee,0xf7,
+ 0x9e,0x0d,0x90,0x77,0x1f,0xea,0xce,0xbe,
+ 0x12,0xf2,0x0e,0x95,0xb3,0x4f,0x0f,0x78,
+ 0xb7,0x37,0xa9,0x61,0x8b,0x26,0xfa,0x7d,
+ 0xbc,0x98,0x74,0xf2,0x72,0xc4,0x2b,0xdb,
+ 0x56,0x3e,0xaf,0xa1,0x6b,0x4f,0xb6,0x8c,
+ 0x3b,0xb1,0xe7,0x8e,0xaa,0x81,0xa0,0x02,
+ 0x43,0xfa,0xad,0xd2,0xbf,0x18,0xe6,0x3d,
+ 0x38,0x9a,0xe4,0x43,0x77,0xda,0x18,0xc5,
+ 0x76,0xb5,0x0f,0x00,0x96,0xcf,0x34,0x19,
+ 0x54,0x83,0xb0,0x05,0x48,0xc0,0x98,0x62,
+ 0x36,0xe3,0xbc,0x7c,0xb8,0xd6,0x80,0x1c,
+ 0x04,0x94,0xcc,0xd1,0x99,0xe5,0xc5,0xbd,
+ 0x0d,0x0e,0xdc,0x9e,0xb8,0xa0,0x00,0x1e,
+ 0x15,0x27,0x67,0x54,0xfc,0xc6,0x85,0x66,
+ 0x05,0x41,0x48,0xe6,0xe7,0x64,0xbe,0xe7,
+ 0xc7,0x64,0xda,0xad,0x3f,0xc4,0x52,0x35,
+ 0xa6,0xda,0xd4,0x28,0xfa,0x20,0xc1,0x70,
+ 0xe3,0x45,0x00,0x3f,0x2f,0x06,0xec,0x81,
+ 0x05,0xfe,0xb2,0x5b,0x22,0x81,0xb6,0x3d,
+ 0x27,0x33,0xbe,0x96,0x1c,0x29,0x95,0x1d,
+ 0x11,0xdd,0x22,0x21,0x65,0x7a,0x9f,0x53,
+ 0x1d,0xda,0x2a,0x19,0x4d,0xbb,0x12,0x64,
+ 0x48,0xbd,0xee,0xb2,0x58,0xe0,0x7e,0xa6,
+ 0x59,0xc7,0x46,0x19,0xa6,0x38,0x0e,0x1d,
+ 0x66,0xd6,0x83,0x2b,0xfe,0x67,0xf6,0x38,
+ 0xcd,0x8f,0xae,0x1f,0x27,0x23,0x02,0x0f,
+ 0x9c,0x40,0xa3,0xfd,0xa6,0x7e,0xda,0x3b,
+ 0xd2,0x92,0x38,0xfb,0xd4,0xd4,0xb4,0x88,
+ 0x5c,0x2a,0x99,0x17,0x6d,0xb1,0xa0,0x6c,
+ 0x50,0x07,0x78,0x49,0x1a,0x82,0x88,0xf1,
+ 0x85,0x5f,0x60,0xff,0xfc,0xf1,0xd1,0x37,
+ 0x3f,0xd9,0x4f,0xc6,0x0c,0x18,0x11,0xe1,
+ 0xac,0x3f,0x1c,0x6d,0x00,0x3b,0xec,0xda,
+ 0x3b,0x1f,0x27,0x25,0xca,0x59,0x5d,0xe0,
+ 0xca,0x63,0x32,0x8f,0x3b,0xe5,0x7c,0xc9,
+ 0x77,0x55,0x60,0x11,0x95,0x14,0x0d,0xfb,
+ 0x59,0xd3,0x9c,0xe0,0x91,0x30,0x8b,0x41,
+ 0x05,0x74,0x6d,0xac,0x23,0xd3,0x3e,0x5f,
+ 0x7c,0xe4,0x84,0x8d,0xa3,0x16,0xa9,0xc6,
+ 0x6b,0x95,0x81,0xba,0x35,0x73,0xbf,0xaf,
+ 0x31,0x14,0x96,0x18,0x8a,0xb1,0x54,0x23,
+ 0x28,0x2e,0xe4,0x16,0xdc,0x2a,0x19,0xc5,
+ 0x72,0x4f,0xa9,0x1a,0xe4,0xad,0xc8,0x8b,
+ 0xc6,0x67,0x96,0xea,0xe5,0x67,0x7a,0x01,
+ 0xf6,0x4e,0x8c,0x08,0x63,0x13,0x95,0x82,
+ 0x2d,0x9d,0xb8,0xfc,0xee,0x35,0xc0,0x6b,
+ 0x1f,0xee,0xa5,0x47,0x4d,0x6d,0x8f,0x34,
+ 0xb1,0x53,0x4a,0x93,0x6a,0x18,0xb0,0xe0,
+ 0xd2,0x0e,0xab,0x86,0xbc,0x9c,0x6d,0x6a,
+ 0x52,0x07,0x19,0x4e,0x67,0xfa,0x35,0x55,
+ 0x1b,0x56,0x80,0x26,0x7b,0x00,0x64,0x1c,
+ 0x0f,0x21,0x2d,0x18,0xec,0xa8,0xd7,0x32,
+ 0x7e,0xd9,0x1f,0xe7,0x64,0xa8,0x4e,0xa1,
+ 0xb4,0x3f,0xf5,0xb4,0xf6,0xe8,0xe6,0x2f,
+ 0x05,0xc6,0x61,0xde,0xfb,0x25,0x88,0x77,
+ 0xc3,0x5b,0x18,0xa1,0x51,0xd5,0xc4,0x14,
+ 0xaa,0xad,0x97,0xba,0x3e,0x49,0x93,0x32,
+ 0xe5,0x96,0x07,0x8e,0x60,0x0d,0xeb,0x81,
+ 0x14,0x9c,0x44,0x1c,0xe9,0x57,0x82,0xf2,
+ 0x2a,0x28,0x25,0x63,0xc5,0xba,0xc1,0x41,
+ 0x14,0x23,0x60,0x5d,0x1a,0xe1,0xaf,0xae,
+ 0x2c,0x8b,0x06,0x60,0x23,0x7e,0xc1,0x28,
+ 0xaa,0x0f,0xe3,0x46,0x4e,0x43,0x58,0x11,
+ 0x5d,0xb8,0x4c,0xc3,0xb5,0x23,0x07,0x3a,
+ 0x28,0xd4,0x54,0x98,0x84,0xb8,0x1f,0xf7,
+ 0x0e,0x10,0xbf,0x36,0x1c,0x13,0x72,0x96,
+ 0x28,0xd5,0x34,0x8f,0x07,0x21,0x1e,0x7e,
+ 0x4c,0xf4,0xf1,0x8b,0x28,0x60,0x90,0xbd,
+ 0xb1,0x24,0x0b,0x66,0xd6,0xcd,0x4a,0xfc,
+ 0xea,0xdc,0x00,0xca,0x44,0x6c,0xe0,0x50,
+ 0x50,0xff,0x18,0x3a,0xd2,0xbb,0xf1,0x18,
+ 0xc1,0xfc,0x0e,0xa5,0x1f,0x97,0xd2,0x2b,
+ 0x8f,0x7e,0x46,0x70,0x5d,0x45,0x27,0xf4,
+ 0x5b,0x42,0xae,0xff,0x39,0x58,0x53,0x37,
+ 0x6f,0x69,0x7d,0xd5,0xfd,0xf2,0xc5,0x18,
+ 0x7d,0x7d,0x5f,0x0e,0x2e,0xb8,0xd4,0x3f,
+ 0x17,0xba,0x0f,0x7c,0x60,0xff,0x43,0x7f,
+ 0x53,0x5d,0xfe,0xf2,0x98,0x33,0xbf,0x86,
+ 0xcb,0xe8,0x8e,0xa4,0xfb,0xd4,0x22,0x1e,
+ 0x84,0x11,0x72,0x83,0x54,0xfa,0x30,0xa7,
+ 0x00,0x8f,0x15,0x4a,0x41,0xc7,0xfc,0x46,
+ 0x6b,0x46,0x45,0xdb,0xe2,0xe3,0x21,0x26,
+ 0x7f,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
+};
+
+struct ffdhe_group {
+ int bits;
+ int minsize;
+ const u8 *p;
+ const u8 *q;
+ const u8 *g;
+} ffdhe_group_map[] = {
+ {
+ .bits = 2048,
+ .minsize = 225,
+ .p = ffdhe2048_p,
+ .q = ffdhe2048_q,
+ .g = ffdhe2048_g,
+ },
+ {
+ .bits = 3072,
+ .minsize = 275,
+ .p = ffdhe3072_p,
+ .q = ffdhe3072_q,
+ .g = ffdhe3072_g,
+ },
+ {
+ .bits = 4096,
+ .minsize = 325,
+ .p = ffdhe4096_p,
+ .q = ffdhe4096_q,
+ .g = ffdhe4096_g,
+ },
+ {
+ .bits = 6144,
+ .minsize = 375,
+ .p = ffdhe6144_p,
+ .q = ffdhe6144_q,
+ .g = ffdhe6144_g,
+ },
+ {
+ .bits = 8192,
+ .minsize = 400,
+ .p = ffdhe8192_p,
+ .q = ffdhe8192_q,
+ .g = ffdhe8192_g,
+ },
+};
+
+int crypto_ffdhe_params(struct dh *p, int bits)
+{
+ struct ffdhe_group *grp = NULL;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ffdhe_group_map); i++) {
+ if (ffdhe_group_map[i].bits == bits) {
+ grp = &ffdhe_group_map[i];
+ break;
+ }
+ }
+ if (!grp || !p)
+ return -EINVAL;
+
+ p->p_size = grp->bits / 8;
+ p->p = (u8 *)grp->p;
+ p->g_size = 1;
+ p->g = (u8 *)grp->g;
+ p->q_size = grp->bits / 8;
+ p->q = (u8 *)grp->q;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(crypto_ffdhe_params);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("FFDHE ephemeral parameters");
diff --git a/include/crypto/ffdhe.h b/include/crypto/ffdhe.h
new file mode 100644
index 000000000000..6cb9253ddb34
--- /dev/null
+++ b/include/crypto/ffdhe.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Finite-Field Diffie-Hellman definition according to RFC 7919
+ *
+ * Copyright (c) 2021, SUSE Software Products
+ * Authors: Hannes Reinecke <[email protected]>
+ */
+#ifndef _CRYPTO_FFDHE_H
+#define _CRYPTO_FFDHE_H
+
+/**
+ * crypto_ffdhe_params() - Fill in FFDHE parameters
+ * @p: DH parameters to be filled in
+ * @bits: Bit size of the FFDHE group
+ *
+ * This function sets the FFDHE parameters for @bits in @p.
+ * Valid bit sizes are 2048, 3072, 4096, 6144, or 8192.
+ *
+ * Returns: 0 on success, errno on failure.
+ */
+
+int crypto_ffdhe_params(struct dh *p, int bits);
+
+#endif /* _CRYPTO_FFDHE_H */
--
2.29.2
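[Editorial note: the group selection in crypto_ffdhe_params() above can be illustrated with a small userspace sketch. This is not kernel code; the table and function name here are hypothetical stand-ins that mirror the lookup logic, with the byte sizes following RFC 7919 (modulus and group size are bits/8, the generator is always the single byte 0x02).]

```c
#include <assert.h>
#include <stddef.h>

/*
 * Userspace sketch (not kernel code) mirroring the lookup logic of
 * crypto_ffdhe_params(): map an FFDHE group bit size to the byte
 * sizes that would be filled into struct dh.  Unknown bit sizes
 * fail, matching the -EINVAL return in the patch.
 */
static const int ffdhe_bits[] = { 2048, 3072, 4096, 6144, 8192 };

static int ffdhe_param_sizes(int bits, size_t *p_size, size_t *g_size)
{
	size_t i;

	for (i = 0; i < sizeof(ffdhe_bits) / sizeof(ffdhe_bits[0]); i++) {
		if (ffdhe_bits[i] == bits) {
			*p_size = bits / 8;	/* modulus p (and q) size in bytes */
			*g_size = 1;		/* generator g is always 0x02 */
			return 0;
		}
	}
	return -1;			/* unknown group */
}
```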

2021-09-10 06:44:23

by Hannes Reinecke

Subject: [PATCH 08/12] nvme-auth: Diffie-Hellman key exchange support

Implement Diffie-Hellman key exchange using FFDHE groups
for NVMe In-Band Authentication.

Signed-off-by: Hannes Reinecke <[email protected]>
---
drivers/nvme/host/Kconfig | 1 +
drivers/nvme/host/auth.c | 190 ++++++++++++++++++++++++++++++++++----
drivers/nvme/host/auth.h | 8 ++
3 files changed, 182 insertions(+), 17 deletions(-)

diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
index 97e8412dc42d..3ba46877d447 100644
--- a/drivers/nvme/host/Kconfig
+++ b/drivers/nvme/host/Kconfig
@@ -90,6 +90,7 @@ config NVME_AUTH
select CRYPTO_HMAC
select CRYPTO_SHA256
select CRYPTO_SHA512
+ select CRYPTO_FFDHE
help
This provides support for NVMe over Fabrics In-Band Authentication
for the NVMe over TCP transport.
diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index 5393ac16a002..cdf64f8e14f3 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -36,6 +36,12 @@ struct nvme_dhchap_queue_context {
u8 c2[64];
u8 response[64];
u8 *host_response;
+ u8 *ctrl_key;
+ int ctrl_key_len;
+ u8 *host_key;
+ int host_key_len;
+ u8 *sess_key;
+ int sess_key_len;
};

static struct nvme_auth_dhgroup_map {
@@ -611,6 +617,7 @@ static int nvme_auth_process_dhchap_challenge(struct nvme_ctrl *ctrl,
struct nvmf_auth_dhchap_challenge_data *data = chap->buf;
size_t size = sizeof(*data) + data->hl + data->dhvlen;
const char *hmac_name;
+ const char *kpp_name;

if (chap->buf_size < size) {
chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
@@ -665,9 +672,9 @@ static int nvme_auth_process_dhchap_challenge(struct nvme_ctrl *ctrl,
chap->hash_len = data->hl;
dev_dbg(ctrl->device, "qid %d: selected hash %s\n",
chap->qid, hmac_name);
-
- gid_name = nvme_auth_dhgroup_kpp(data->dhgid);
- if (!gid_name) {
+select_kpp:
+ kpp_name = nvme_auth_dhgroup_kpp(data->dhgid);
+ if (!kpp_name) {
dev_warn(ctrl->device,
"qid %d: invalid DH group id %d\n",
chap->qid, data->dhgid);
@@ -676,6 +683,8 @@ static int nvme_auth_process_dhchap_challenge(struct nvme_ctrl *ctrl,
}

if (data->dhgid != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
+ const char *gid_name = nvme_auth_dhgroup_name(data->dhgid);
+
if (data->dhvlen == 0) {
dev_warn(ctrl->device,
"qid %d: empty DH value\n",
@@ -683,31 +692,55 @@ static int nvme_auth_process_dhchap_challenge(struct nvme_ctrl *ctrl,
chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
return -EPROTO;
}
- chap->dh_tfm = crypto_alloc_kpp(gid_name, 0, 0);
+ if (chap->dh_tfm && chap->dhgroup_id == data->dhgid) {
+ dev_dbg(ctrl->device,
+ "qid %d: reuse existing DH group %s\n",
+ chap->qid, gid_name);
+ goto skip_kpp;
+ }
+ chap->dh_tfm = crypto_alloc_kpp(kpp_name, 0, 0);
if (IS_ERR(chap->dh_tfm)) {
int ret = PTR_ERR(chap->dh_tfm);

dev_warn(ctrl->device,
- "qid %d: failed to initialize %s\n",
+ "qid %d: failed to initialize DH group %s\n",
chap->qid, gid_name);
chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
chap->dh_tfm = NULL;
return ret;
}
- chap->dhgroup_id = data->dhgid;
- } else if (data->dhvlen != 0) {
- dev_warn(ctrl->device,
- "qid %d: invalid DH value for NULL DH\n",
- chap->qid);
- chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
- return -EPROTO;
+ /* Clear host key to avoid accidental reuse */
+ kfree_sensitive(chap->host_key);
+ chap->host_key_len = 0;
+ dev_dbg(ctrl->device, "qid %d: selected DH group %s\n",
+ chap->qid, gid_name);
+ } else {
+ if (data->dhvlen != 0) {
+ dev_warn(ctrl->device,
+ "qid %d: invalid DH value for NULL DH\n",
+ chap->qid);
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
+ return -EPROTO;
+ }
+ if (chap->dh_tfm) {
+ crypto_free_kpp(chap->dh_tfm);
+ chap->dh_tfm = NULL;
+ }
}
- dev_dbg(ctrl->device, "qid %d: selected DH group %s\n",
- chap->qid, gid_name);
-
-select_kpp:
+ chap->dhgroup_id = data->dhgid;
+skip_kpp:
chap->s1 = le32_to_cpu(data->seqnum);
memcpy(chap->c1, data->cval, chap->hash_len);
+ if (data->dhvlen) {
+ chap->ctrl_key = kmalloc(data->dhvlen, GFP_KERNEL);
+ if (!chap->ctrl_key)
+ return -ENOMEM;
+ chap->ctrl_key_len = data->dhvlen;
+ memcpy(chap->ctrl_key, data->cval + chap->hash_len,
+ data->dhvlen);
+ dev_dbg(ctrl->device, "ctrl public key %*ph\n",
+ (int)chap->ctrl_key_len, chap->ctrl_key);
+ }

return 0;
}
@@ -725,6 +758,8 @@ static int nvme_auth_set_dhchap_reply_data(struct nvme_ctrl *ctrl,
} else
memset(chap->c2, 0, chap->hash_len);

+ if (chap->host_key_len)
+ size += chap->host_key_len;

if (chap->buf_size < size) {
chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
@@ -735,7 +770,7 @@ static int nvme_auth_set_dhchap_reply_data(struct nvme_ctrl *ctrl,
data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_REPLY;
data->t_id = cpu_to_le16(chap->transaction);
data->hl = chap->hash_len;
- data->dhvlen = 0;
+ data->dhvlen = chap->host_key_len;
data->seqnum = cpu_to_le32(chap->s2);
memcpy(data->rval, chap->response, chap->hash_len);
if (ctrl->opts->dhchap_bidi) {
@@ -746,6 +781,13 @@ static int nvme_auth_set_dhchap_reply_data(struct nvme_ctrl *ctrl,
memcpy(data->rval + chap->hash_len, chap->c2,
chap->hash_len);
}
+ if (chap->host_key_len) {
+ dev_dbg(ctrl->device, "%s: qid %d host public key %*ph\n",
+ __func__, chap->qid,
+ chap->host_key_len, chap->host_key);
+ memcpy(data->rval + 2 * chap->hash_len, chap->host_key,
+ chap->host_key_len);
+ }
return size;
}

@@ -832,6 +874,27 @@ static int nvme_auth_dhchap_host_response(struct nvme_ctrl *ctrl,

dev_dbg(ctrl->device, "%s: qid %d host response seq %d transaction %d\n",
__func__, chap->qid, chap->s1, chap->transaction);
+
+ if (!chap->host_response) {
+ chap->host_response = nvme_auth_transform_key(ctrl->dhchap_key,
+ chap->hash_len, chap->hash_id,
+ ctrl->opts->host->nqn);
+ if (IS_ERR(chap->host_response)) {
+ ret = PTR_ERR(chap->host_response);
+ chap->host_response = NULL;
+ return ret;
+ }
+ }
+ ret = crypto_shash_setkey(chap->shash_tfm,
+ chap->host_response, chap->hash_len);
+ if (ret) {
+ dev_warn(ctrl->device, "qid %d: failed to set key, error %d\n",
+ chap->qid, ret);
+ goto out;
+ }
+ dev_dbg(ctrl->device,
+ "%s: using key %*ph\n", __func__,
+ (int)chap->hash_len, chap->host_response);
if (chap->dh_tfm) {
challenge = kmalloc(chap->hash_len, GFP_KERNEL);
if (!challenge) {
@@ -890,9 +953,28 @@ static int nvme_auth_dhchap_ctrl_response(struct nvme_ctrl *ctrl,
struct nvme_dhchap_queue_context *chap)
{
SHASH_DESC_ON_STACK(shash, chap->shash_tfm);
+ u8 *ctrl_response;
u8 buf[4], *challenge = chap->c2;
int ret;

+ ctrl_response = nvme_auth_transform_key(ctrl->dhchap_key,
+ chap->hash_len, chap->hash_id,
+ ctrl->opts->subsysnqn);
+ if (IS_ERR(ctrl_response)) {
+ ret = PTR_ERR(ctrl_response);
+ return ret;
+ }
+ ret = crypto_shash_setkey(chap->shash_tfm,
+ ctrl_response, ctrl->dhchap_key_len);
+ if (ret) {
+ dev_warn(ctrl->device, "qid %d: failed to set key, error %d\n",
+ chap->qid, ret);
+ goto out;
+ }
+ dev_dbg(ctrl->device,
+ "%s: using key %*ph\n", __func__,
+ (int)ctrl->dhchap_key_len, ctrl_response);
+
if (chap->dh_tfm) {
challenge = kmalloc(chap->hash_len, GFP_KERNEL);
if (!challenge) {
@@ -983,8 +1065,77 @@ int nvme_auth_generate_key(struct nvme_ctrl *ctrl)
}
EXPORT_SYMBOL_GPL(nvme_auth_generate_key);

+static int nvme_auth_dhchap_exponential(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ int ret;
+
+ if (chap->host_key && chap->host_key_len) {
+ dev_dbg(ctrl->device,
+ "qid %d: reusing host key\n", chap->qid);
+ goto gen_sesskey;
+ }
+ ret = nvme_auth_gen_privkey(chap->dh_tfm, chap->dhgroup_id);
+ if (ret < 0) {
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ return ret;
+ }
+
+ chap->host_key_len =
+ nvme_auth_dhgroup_pubkey_size(chap->dhgroup_id);
+
+ chap->host_key = kzalloc(chap->host_key_len, GFP_KERNEL);
+ if (!chap->host_key) {
+ chap->host_key_len = 0;
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
+ return -ENOMEM;
+ }
+ ret = nvme_auth_gen_pubkey(chap->dh_tfm,
+ chap->host_key, chap->host_key_len);
+ if (ret) {
+ dev_dbg(ctrl->device,
+ "failed to generate public key, error %d\n", ret);
+ kfree(chap->host_key);
+ chap->host_key = NULL;
+ chap->host_key_len = 0;
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ return ret;
+ }
+
+gen_sesskey:
+ chap->sess_key_len = chap->host_key_len;
+ chap->sess_key = kmalloc(chap->sess_key_len, GFP_KERNEL);
+ if (!chap->sess_key) {
+ chap->sess_key_len = 0;
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
+ return -ENOMEM;
+ }
+
+ ret = nvme_auth_gen_shared_secret(chap->dh_tfm,
+ chap->ctrl_key, chap->ctrl_key_len,
+ chap->sess_key, chap->sess_key_len);
+ if (ret) {
+ dev_dbg(ctrl->device,
+ "failed to generate shared secret, error %d\n", ret);
+ kfree_sensitive(chap->sess_key);
+ chap->sess_key = NULL;
+ chap->sess_key_len = 0;
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ return ret;
+ }
+ dev_dbg(ctrl->device, "shared secret %*ph\n",
+ (int)chap->sess_key_len, chap->sess_key);
+ return 0;
+}
+
static void nvme_auth_reset(struct nvme_dhchap_queue_context *chap)
{
+ kfree_sensitive(chap->ctrl_key);
+ chap->ctrl_key = NULL;
+ chap->ctrl_key_len = 0;
+ kfree_sensitive(chap->sess_key);
+ chap->sess_key = NULL;
+ chap->sess_key_len = 0;
chap->status = 0;
chap->error = 0;
chap->s1 = 0;
@@ -998,6 +1149,11 @@ static void __nvme_auth_free(struct nvme_dhchap_queue_context *chap)
{
if (chap->shash_tfm)
crypto_free_shash(chap->shash_tfm);
+ if (chap->dh_tfm)
+ crypto_free_kpp(chap->dh_tfm);
+ kfree_sensitive(chap->ctrl_key);
+ kfree_sensitive(chap->host_key);
+ kfree_sensitive(chap->sess_key);
kfree_sensitive(chap->host_response);
kfree(chap->buf);
kfree(chap);
diff --git a/drivers/nvme/host/auth.h b/drivers/nvme/host/auth.h
index cf1255f9db6d..aec954e9de1e 100644
--- a/drivers/nvme/host/auth.h
+++ b/drivers/nvme/host/auth.h
@@ -21,5 +21,13 @@ int nvme_auth_hmac_id(const char *hmac_name);
unsigned char *nvme_auth_extract_secret(unsigned char *dhchap_secret,
size_t *dhchap_key_len);
u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn);
+int nvme_auth_augmented_challenge(u8 hmac_id, u8 *skey, size_t skey_len,
+ u8 *challenge, u8 *aug, size_t hlen);
+int nvme_auth_gen_privkey(struct crypto_kpp *dh_tfm, int dh_gid);
+int nvme_auth_gen_pubkey(struct crypto_kpp *dh_tfm,
+ u8 *host_key, size_t host_key_len);
+int nvme_auth_gen_shared_secret(struct crypto_kpp *dh_tfm,
+ u8 *ctrl_key, size_t ctrl_key_len,
+ u8 *sess_key, size_t sess_key_len);

#endif /* _NVME_AUTH_H */
--
2.29.2
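[Editorial note: the reply-payload handling in nvme_auth_set_dhchap_reply_data() above packs three variable-length fields into the 'rval' area: the response, the optional host challenge c2 (bidirectional authentication), and the optional host DH public key. A minimal sketch of that layout, with hypothetical helper names and hdr_len standing in for sizeof(*data):]

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the DH-HMAC-CHAP reply payload layout:
 *
 *   rval[0 .. hl-1]        response
 *   rval[hl .. 2*hl-1]     host challenge c2 (zeroed if not bidi)
 *   rval[2*hl ..]          host public key (dhvlen bytes, FFDHE only)
 *
 * Offsets and names are illustrative, not the kernel API.
 */
static size_t dhchap_reply_size(size_t hdr_len, size_t hl, size_t dhvlen)
{
	return hdr_len + 2 * hl + dhvlen;
}

static size_t dhchap_pubkey_offset(size_t hl)
{
	return 2 * hl;	/* matches data->rval + 2 * chap->hash_len */
}
```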

2021-09-10 06:44:38

by Hannes Reinecke

Subject: [PATCH 01/12] crypto: add crypto_has_shash()

Add a helper function to determine whether a given synchronous hash
algorithm is supported.

Signed-off-by: Hannes Reinecke <[email protected]>
---
crypto/shash.c | 6 ++++++
include/crypto/hash.h | 2 ++
2 files changed, 8 insertions(+)

diff --git a/crypto/shash.c b/crypto/shash.c
index 0a0a50cb694f..4c88e63b3350 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -521,6 +521,12 @@ struct crypto_shash *crypto_alloc_shash(const char *alg_name, u32 type,
}
EXPORT_SYMBOL_GPL(crypto_alloc_shash);

+int crypto_has_shash(const char *alg_name, u32 type, u32 mask)
+{
+ return crypto_type_has_alg(alg_name, &crypto_shash_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_has_shash);
+
static int shash_prepare_alg(struct shash_alg *alg)
{
struct crypto_alg *base = &alg->base;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index f140e4643949..f5841992dc9b 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -718,6 +718,8 @@ static inline void ahash_request_set_crypt(struct ahash_request *req,
struct crypto_shash *crypto_alloc_shash(const char *alg_name, u32 type,
u32 mask);

+int crypto_has_shash(const char *alg_name, u32 type, u32 mask);
+
static inline struct crypto_tfm *crypto_shash_tfm(struct crypto_shash *tfm)
{
return &tfm->base;
--
2.29.2
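[Editorial note: crypto_has_shash() lets a caller probe whether an algorithm is available before committing to allocating a transform. A userspace sketch of the probe-before-use pattern, with a static name table standing in for the kernel's crypto algorithm registry:]

```c
#include <assert.h>
#include <string.h>

/*
 * Userspace sketch of the probe pattern crypto_has_shash() enables.
 * In the kernel the probe consults the algorithm registry; here a
 * static table of hypothetical names stands in for it.
 */
static const char *supported_shash[] = {
	"hmac(sha256)", "hmac(sha384)", "hmac(sha512)",
};

static int has_shash(const char *alg_name)
{
	size_t i;

	for (i = 0; i < sizeof(supported_shash) / sizeof(supported_shash[0]); i++) {
		if (strcmp(supported_shash[i], alg_name) == 0)
			return 1;	/* safe to allocate this transform */
	}
	return 0;
}
```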

2021-09-10 06:44:38

by Hannes Reinecke

Subject: [PATCH 12/12] nvmet-auth: expire authentication sessions

Each authentication step is required to be completed within the
KATO interval (or two minutes if KATO is not set). So add a
delayed work function to reset the transaction ID and the expected
next protocol step; this will automatically fail the next
authentication command referring to the terminated authentication
transaction.

Signed-off-by: Hannes Reinecke <[email protected]>
---
drivers/nvme/target/auth.c | 1 +
drivers/nvme/target/fabrics-cmd-auth.c | 20 +++++++++++++++++++-
drivers/nvme/target/nvmet.h | 1 +
3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c
index fe44593a37f8..c7c62ba089da 100644
--- a/drivers/nvme/target/auth.c
+++ b/drivers/nvme/target/auth.c
@@ -197,6 +197,7 @@ int nvmet_setup_auth(struct nvmet_ctrl *ctrl)

void nvmet_auth_sq_free(struct nvmet_sq *sq)
{
+ cancel_delayed_work(&sq->auth_expired_work);
kfree(sq->dhchap_c1);
sq->dhchap_c1 = NULL;
kfree(sq->dhchap_c2);
diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
index 2f1b95098917..7e7322846b82 100644
--- a/drivers/nvme/target/fabrics-cmd-auth.c
+++ b/drivers/nvme/target/fabrics-cmd-auth.c
@@ -12,9 +12,22 @@
#include "nvmet.h"
#include "../host/auth.h"

+static void nvmet_auth_expired_work(struct work_struct *work)
+{
+ struct nvmet_sq *sq = container_of(to_delayed_work(work),
+ struct nvmet_sq, auth_expired_work);
+
+ pr_debug("%s: ctrl %d qid %d transaction %u expired, resetting\n",
+ __func__, sq->ctrl->cntlid, sq->qid, sq->dhchap_tid);
+ sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
+ sq->dhchap_tid = -1;
+}
+
void nvmet_init_auth(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
{
/* Initialize in-band authentication */
+ INIT_DELAYED_WORK(&req->sq->auth_expired_work,
+ nvmet_auth_expired_work);
req->sq->authenticated = false;
req->sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
req->cqe->result.u32 |= 0x2 << 16;
@@ -303,8 +316,13 @@ void nvmet_execute_auth_send(struct nvmet_req *req)
req->cqe->result.u64 = 0;
nvmet_req_complete(req, status);
if (req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2 &&
- req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_FAILURE2)
+ req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_FAILURE2) {
+ unsigned long auth_expire_secs = ctrl->kato ? ctrl->kato : 120;
+
+ mod_delayed_work(system_wq, &req->sq->auth_expired_work,
+ auth_expire_secs * HZ);
return;
+ }
/* Final states, clear up variables */
nvmet_auth_sq_free(req->sq);
if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE2)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index d0849404f398..84bf7043674e 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -109,6 +109,7 @@ struct nvmet_sq {
u32 sqhd;
bool sqhd_disabled;
#ifdef CONFIG_NVME_TARGET_AUTH
+ struct delayed_work auth_expired_work;
bool authenticated;
u16 dhchap_tid;
u16 dhchap_status;
--
2.29.2
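[Editorial note: the expiry rule above is small but worth spelling out: each DH-HMAC-CHAP step must complete within the KATO interval, falling back to two minutes (120 seconds) when no KATO has been negotiated. A sketch, with a hypothetical helper name:]

```c
#include <assert.h>

/*
 * Sketch of the expiry interval computed in nvmet_execute_auth_send():
 * use the controller's KATO if set, otherwise two minutes.  In the
 * kernel this value is multiplied by HZ for mod_delayed_work().
 */
static unsigned long auth_expire_secs(unsigned int kato)
{
	return kato ? kato : 120;
}
```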

2021-09-10 06:44:38

by Hannes Reinecke

Subject: [PATCH 10/12] nvmet: Implement basic In-Band Authentication

Implement NVMe-oF In-Band authentication according to NVMe TPAR 8006.
This patch adds two additional configfs entries, 'dhchap_key' and
'dhchap_hash', to the 'host' configfs directory. The 'dhchap_key'
needs to be given in the ASCII format specified in NVMe 2.0 section
8.13.5.8 'Secret representation'.
'dhchap_hash' is taken from the hash specified in the ASCII
representation of the key, or defaults to 'hmac(sha256)' if no
key transformation has been specified.
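[Editorial note: the secret representation mentioned above has the form "DHHC-1:<hash id>:<base64 secret>:", and the patch below validates it in nvmet_auth_set_host_key(). A userspace sketch of that parse, where hash id 0 means no transformation and 1..3 select SHA-256/384/512; the function name and return convention are illustrative, and %hhu with unsigned char is used here for strict userspace C:]

```c
#include <assert.h>
#include <stdio.h>

/*
 * Sketch of parsing the "DHHC-1:<hash>:<base64 secret>:" secret
 * representation.  Rejects anything that does not match the prefix
 * or that names a hash id outside 0..3.
 */
static int parse_dhchap_secret(const char *secret, unsigned char *key_hash)
{
	if (sscanf(secret, "DHHC-1:%hhu:%*s", key_hash) != 1)
		return -1;	/* not in secret representation format */
	if (*key_hash > 3)
		return -1;	/* invalid DH-HMAC-CHAP hash id */
	return 0;
}
```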

Signed-off-by: Hannes Reinecke <[email protected]>
---
drivers/nvme/target/Kconfig | 11 +
drivers/nvme/target/Makefile | 1 +
drivers/nvme/target/admin-cmd.c | 4 +
drivers/nvme/target/auth.c | 301 ++++++++++++++++
drivers/nvme/target/configfs.c | 71 +++-
drivers/nvme/target/core.c | 8 +
drivers/nvme/target/fabrics-cmd-auth.c | 464 +++++++++++++++++++++++++
drivers/nvme/target/fabrics-cmd.c | 30 +-
drivers/nvme/target/nvmet.h | 63 ++++
9 files changed, 950 insertions(+), 3 deletions(-)
create mode 100644 drivers/nvme/target/auth.c
create mode 100644 drivers/nvme/target/fabrics-cmd-auth.c

diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig
index 973561c93888..70f3c385fc9f 100644
--- a/drivers/nvme/target/Kconfig
+++ b/drivers/nvme/target/Kconfig
@@ -83,3 +83,14 @@ config NVME_TARGET_TCP
devices over TCP.

If unsure, say N.
+
+config NVME_TARGET_AUTH
+ bool "NVMe over Fabrics In-band Authentication support"
+ depends on NVME_TARGET
+ select CRYPTO_HMAC
+ select CRYPTO_SHA256
+ select CRYPTO_SHA512
+ help
+ This enables support for NVMe over Fabrics In-band Authentication.
+
+ If unsure, say N.
diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index 9837e580fa7e..c66820102493 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -13,6 +13,7 @@ nvmet-y += core.o configfs.o admin-cmd.o fabrics-cmd.o \
discovery.o io-cmd-file.o io-cmd-bdev.o
nvmet-$(CONFIG_NVME_TARGET_PASSTHRU) += passthru.o
nvmet-$(CONFIG_BLK_DEV_ZONED) += zns.o
+nvmet-$(CONFIG_NVME_TARGET_AUTH) += fabrics-cmd-auth.o auth.o
nvme-loop-y += loop.o
nvmet-rdma-y += rdma.o
nvmet-fc-y += fc.o
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index aa6d84d8848e..868d65c869cd 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -1008,6 +1008,10 @@ u16 nvmet_parse_admin_cmd(struct nvmet_req *req)

if (nvme_is_fabrics(cmd))
return nvmet_parse_fabrics_cmd(req);
+
+ if (unlikely(!nvmet_check_auth_status(req)))
+ return NVME_SC_AUTH_REQUIRED | NVME_SC_DNR;
+
if (nvmet_req_subsys(req)->type == NVME_NQN_DISC)
return nvmet_parse_discovery_cmd(req);

diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c
new file mode 100644
index 000000000000..5b5f3cd4f914
--- /dev/null
+++ b/drivers/nvme/target/auth.c
@@ -0,0 +1,301 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMe over Fabrics DH-HMAC-CHAP authentication.
+ * Copyright (c) 2020 Hannes Reinecke, SUSE Software Solutions.
+ * All rights reserved.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/err.h>
+#include <crypto/hash.h>
+#include <linux/crc32.h>
+#include <linux/base64.h>
+#include <linux/ctype.h>
+#include <linux/random.h>
+#include <asm/unaligned.h>
+
+#include "nvmet.h"
+#include "../host/auth.h"
+
+int nvmet_auth_set_host_key(struct nvmet_host *host, const char *secret)
+{
+ if (sscanf(secret, "DHHC-1:%hhd:%*s", &host->dhchap_key_hash) != 1)
+ return -EINVAL;
+ if (host->dhchap_key_hash > 3) {
+ pr_warn("Invalid DH-HMAC-CHAP hash id %d\n",
+ host->dhchap_key_hash);
+ return -EINVAL;
+ }
+ if (host->dhchap_key_hash > 0) {
+ /* Validate selected hash algorithm */
+ const char *hmac = nvme_auth_hmac_name(host->dhchap_key_hash);
+
+ if (!crypto_has_shash(hmac, 0, 0)) {
+ pr_err("DH-HMAC-CHAP hash %s unsupported\n", hmac);
+ host->dhchap_key_hash = -1;
+ return -EOPNOTSUPP;
+ }
+ /* Use this hash as default */
+ if (!host->dhchap_hash_id)
+ host->dhchap_hash_id = host->dhchap_key_hash;
+ }
+ host->dhchap_secret = kstrdup(secret, GFP_KERNEL);
+ if (!host->dhchap_secret)
+ return -ENOMEM;
+ /* Default to SHA256 */
+ if (!host->dhchap_hash_id)
+ host->dhchap_hash_id = NVME_AUTH_DHCHAP_SHA256;
+
+ pr_debug("Using hash %s\n",
+ nvme_auth_hmac_name(host->dhchap_hash_id));
+ return 0;
+}
+
+int nvmet_setup_auth(struct nvmet_ctrl *ctrl)
+{
+ int ret = 0;
+ struct nvmet_host_link *p;
+ struct nvmet_host *host = NULL;
+ const char *hash_name;
+
+ down_read(&nvmet_config_sem);
+ if (ctrl->subsys->type == NVME_NQN_DISC)
+ goto out_unlock;
+
+ list_for_each_entry(p, &ctrl->subsys->hosts, entry) {
+ pr_debug("check %s\n", nvmet_host_name(p->host));
+ if (strcmp(nvmet_host_name(p->host), ctrl->hostnqn))
+ continue;
+ host = p->host;
+ break;
+ }
+ if (!host) {
+ pr_debug("host %s not found\n", ctrl->hostnqn);
+ ret = -EPERM;
+ goto out_unlock;
+ }
+ if (!host->dhchap_secret) {
+ pr_debug("No authentication provided\n");
+ goto out_unlock;
+ }
+ if (ctrl->shash_tfm &&
+ host->dhchap_hash_id == ctrl->shash_id) {
+ pr_debug("Re-use existing hash ID %d\n",
+ ctrl->shash_id);
+ ret = 0;
+ goto out_unlock;
+ }
+ hash_name = nvme_auth_hmac_name(host->dhchap_hash_id);
+ if (!hash_name) {
+ pr_warn("Hash ID %d invalid\n", host->dhchap_hash_id);
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+ ctrl->shash_tfm = crypto_alloc_shash(hash_name, 0,
+ CRYPTO_ALG_ALLOCATES_MEMORY);
+ if (IS_ERR(ctrl->shash_tfm)) {
+ pr_err("failed to allocate shash %s\n", hash_name);
+ ret = PTR_ERR(ctrl->shash_tfm);
+ ctrl->shash_tfm = NULL;
+ goto out_unlock;
+ }
+ ctrl->shash_id = host->dhchap_hash_id;
+
+ /* Skip the 'DHHC-1:XX:' prefix */
+ ctrl->dhchap_key = nvme_auth_extract_secret(host->dhchap_secret + 10,
+ &ctrl->dhchap_key_len);
+ if (IS_ERR(ctrl->dhchap_key)) {
+ ret = PTR_ERR(ctrl->dhchap_key);
+ pr_debug("failed to extract host key, error %d\n", ret);
+ ctrl->dhchap_key = NULL;
+ goto out_free_hash;
+ }
+ pr_debug("%s: using key %*ph\n", __func__,
+ (int)ctrl->dhchap_key_len, ctrl->dhchap_key);
+out_free_hash:
+ if (ret) {
+ if (ctrl->dhchap_key) {
+ kfree_sensitive(ctrl->dhchap_key);
+ ctrl->dhchap_key = NULL;
+ }
+ crypto_free_shash(ctrl->shash_tfm);
+ ctrl->shash_tfm = NULL;
+ ctrl->shash_id = 0;
+ }
+out_unlock:
+ up_read(&nvmet_config_sem);
+
+ return ret;
+}
+
+void nvmet_auth_sq_free(struct nvmet_sq *sq)
+{
+ kfree(sq->dhchap_c1);
+ sq->dhchap_c1 = NULL;
+ kfree(sq->dhchap_c2);
+ sq->dhchap_c2 = NULL;
+ kfree(sq->dhchap_skey);
+ sq->dhchap_skey = NULL;
+}
+
+void nvmet_destroy_auth(struct nvmet_ctrl *ctrl)
+{
+ if (ctrl->shash_tfm) {
+ crypto_free_shash(ctrl->shash_tfm);
+ ctrl->shash_tfm = NULL;
+ ctrl->shash_id = 0;
+ }
+ if (ctrl->dhchap_key) {
+ kfree_sensitive(ctrl->dhchap_key);
+ ctrl->dhchap_key = NULL;
+ }
+}
+
+bool nvmet_check_auth_status(struct nvmet_req *req)
+{
+ if (req->sq->ctrl->shash_tfm &&
+ !req->sq->authenticated)
+ return false;
+ return true;
+}
+
+int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response,
+ unsigned int shash_len)
+{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ SHASH_DESC_ON_STACK(shash, ctrl->shash_tfm);
+ u8 *challenge = req->sq->dhchap_c1, *host_response;
+ u8 buf[4];
+ int ret;
+
+ host_response = nvme_auth_transform_key(ctrl->dhchap_key,
+ shash_len, ctrl->shash_id,
+ ctrl->hostnqn);
+ if (IS_ERR(host_response))
+ return PTR_ERR(host_response);
+
+ ret = crypto_shash_setkey(ctrl->shash_tfm, host_response, shash_len);
+ if (ret) {
+ kfree_sensitive(host_response);
+ return ret;
+ }
+ if (ctrl->dh_gid != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
+ ret = -EOPNOTSUPP;
+ goto out;
+ }
+
+ shash->tfm = ctrl->shash_tfm;
+ ret = crypto_shash_init(shash);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, challenge, shash_len);
+ if (ret)
+ goto out;
+ put_unaligned_le32(req->sq->dhchap_s1, buf);
+ ret = crypto_shash_update(shash, buf, 4);
+ if (ret)
+ goto out;
+ put_unaligned_le16(req->sq->dhchap_tid, buf);
+ ret = crypto_shash_update(shash, buf, 2);
+ if (ret)
+ goto out;
+ memset(buf, 0, 4);
+ ret = crypto_shash_update(shash, buf, 1);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, "HostHost", 8);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, ctrl->hostnqn, strlen(ctrl->hostnqn));
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, buf, 1);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, ctrl->subsysnqn,
+ strlen(ctrl->subsysnqn));
+ if (ret)
+ goto out;
+ ret = crypto_shash_final(shash, response);
+out:
+ if (challenge != req->sq->dhchap_c1)
+ kfree(challenge);
+ kfree_sensitive(host_response);
+ return ret;
+}
+
+int nvmet_auth_ctrl_hash(struct nvmet_req *req, u8 *response,
+ unsigned int shash_len)
+{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ SHASH_DESC_ON_STACK(shash, ctrl->shash_tfm);
+ u8 *challenge = req->sq->dhchap_c2, *ctrl_response;
+ u8 buf[4];
+ int ret;
+
+ pr_debug("%s: ctrl %d hash seq %d transaction %u\n", __func__,
+ ctrl->cntlid, req->sq->dhchap_s2, req->sq->dhchap_tid);
+ pr_debug("%s: ctrl %d challenge %*ph\n", __func__,
+ ctrl->cntlid, shash_len, req->sq->dhchap_c2);
+ pr_debug("%s: ctrl %d subsysnqn %s\n", __func__,
+ ctrl->cntlid, ctrl->subsysnqn);
+ pr_debug("%s: ctrl %d hostnqn %s\n", __func__,
+ ctrl->cntlid, ctrl->hostnqn);
+
+ ctrl_response = nvme_auth_transform_key(ctrl->dhchap_key,
+ shash_len, ctrl->shash_id,
+ ctrl->subsysnqn);
+ if (IS_ERR(ctrl_response))
+ return PTR_ERR(ctrl_response);
+
+ ret = crypto_shash_setkey(ctrl->shash_tfm, ctrl_response, shash_len);
+ if (ret) {
+ kfree_sensitive(ctrl_response);
+ return ret;
+ }
+ if (ctrl->dh_gid != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
+ ret = -EOPNOTSUPP;
+ goto out;
+ }
+
+ shash->tfm = ctrl->shash_tfm;
+ ret = crypto_shash_init(shash);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, challenge, shash_len);
+ if (ret)
+ goto out;
+ put_unaligned_le32(req->sq->dhchap_s2, buf);
+ ret = crypto_shash_update(shash, buf, 4);
+ if (ret)
+ goto out;
+ put_unaligned_le16(req->sq->dhchap_tid, buf);
+ ret = crypto_shash_update(shash, buf, 2);
+ if (ret)
+ goto out;
+ memset(buf, 0, 4);
+ ret = crypto_shash_update(shash, buf, 1);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, "Controller", 10);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, ctrl->subsysnqn,
+ strlen(ctrl->subsysnqn));
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, buf, 1);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, ctrl->hostnqn, strlen(ctrl->hostnqn));
+ if (ret)
+ goto out;
+ ret = crypto_shash_final(shash, response);
+out:
+ if (challenge != req->sq->dhchap_c2)
+ kfree(challenge);
+ kfree_sensitive(ctrl_response);
+ return ret;
+}
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index d784f3c200b4..7c13810a637f 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -11,8 +11,13 @@
#include <linux/ctype.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
+#include <crypto/hash.h>
+#include <crypto/kpp.h>

#include "nvmet.h"
+#ifdef CONFIG_NVME_TARGET_AUTH
+#include "../host/auth.h"
+#endif

static const struct config_item_type nvmet_host_type;
static const struct config_item_type nvmet_subsys_type;
@@ -1657,10 +1662,71 @@ static const struct config_item_type nvmet_ports_type = {
static struct config_group nvmet_subsystems_group;
static struct config_group nvmet_ports_group;

-static void nvmet_host_release(struct config_item *item)
+#ifdef CONFIG_NVME_TARGET_AUTH
+static ssize_t nvmet_host_dhchap_key_show(struct config_item *item,
+ char *page)
+{
+ u8 *dhchap_secret = to_host(item)->dhchap_secret;
+
+ if (!dhchap_secret)
+ return sprintf(page, "\n");
+ return sprintf(page, "%s\n", dhchap_secret);
+}
+
+static ssize_t nvmet_host_dhchap_key_store(struct config_item *item,
+ const char *page, size_t count)
{
struct nvmet_host *host = to_host(item);
+ int ret;

+ ret = nvmet_auth_set_host_key(host, page);
+ if (ret < 0)
+ return ret;
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_host_, dhchap_key);
+
+static ssize_t nvmet_host_dhchap_hash_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_host *host = to_host(item);
+ const char *hash_name = nvme_auth_hmac_name(host->dhchap_hash_id);
+
+ return sprintf(page, "%s\n", hash_name ? hash_name : "none");
+}
+
+static ssize_t nvmet_host_dhchap_hash_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_host *host = to_host(item);
+ int hmac_id;
+
+ hmac_id = nvme_auth_hmac_id(page);
+ if (hmac_id < 0)
+ return -EINVAL;
+ if (!crypto_has_shash(nvme_auth_hmac_name(hmac_id), 0, 0))
+ return -EOPNOTSUPP;
+ host->dhchap_hash_id = hmac_id;
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_host_, dhchap_hash);
+
+static struct configfs_attribute *nvmet_host_attrs[] = {
+ &nvmet_host_attr_dhchap_key,
+ &nvmet_host_attr_dhchap_hash,
+ NULL,
+};
+#endif /* CONFIG_NVME_TARGET_AUTH */
+
+static void nvmet_host_release(struct config_item *item)
+{
+ struct nvmet_host *host = to_host(item);
+#ifdef CONFIG_NVME_TARGET_AUTH
+ if (host->dhchap_secret)
+ kfree(host->dhchap_secret);
+#endif
kfree(host);
}

@@ -1670,6 +1736,9 @@ static struct configfs_item_operations nvmet_host_item_ops = {

static const struct config_item_type nvmet_host_type = {
.ct_item_ops = &nvmet_host_item_ops,
+#ifdef CONFIG_NVME_TARGET_AUTH
+ .ct_attrs = nvmet_host_attrs,
+#endif
.ct_owner = THIS_MODULE,
};

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 6e253c3c5e0f..afe7ca1f9175 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -793,6 +793,7 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
wait_for_completion(&sq->confirm_done);
wait_for_completion(&sq->free_done);
percpu_ref_exit(&sq->ref);
+ nvmet_auth_sq_free(sq);

if (ctrl) {
/*
@@ -1268,6 +1269,11 @@ u16 nvmet_check_ctrl_status(struct nvmet_req *req)
req->cmd->common.opcode, req->sq->qid);
return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
}
+
+ if (unlikely(!nvmet_check_auth_status(req))) {
+ pr_warn("qid %d not authenticated\n", req->sq->qid);
+ return NVME_SC_AUTH_REQUIRED | NVME_SC_DNR;
+ }
return 0;
}

@@ -1459,6 +1465,8 @@ static void nvmet_ctrl_free(struct kref *ref)
flush_work(&ctrl->async_event_work);
cancel_work_sync(&ctrl->fatal_err_work);

+ nvmet_destroy_auth(ctrl);
+
ida_simple_remove(&cntlid_ida, ctrl->cntlid);

nvmet_async_events_free(ctrl);
diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
new file mode 100644
index 000000000000..ab9dfc06bac0
--- /dev/null
+++ b/drivers/nvme/target/fabrics-cmd-auth.c
@@ -0,0 +1,464 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMe over Fabrics DH-HMAC-CHAP authentication command handling.
+ * Copyright (c) 2020 Hannes Reinecke, SUSE Software Solutions.
+ * All rights reserved.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/blkdev.h>
+#include <linux/random.h>
+#include <crypto/hash.h>
+#include <crypto/kpp.h>
+#include "nvmet.h"
+#include "../host/auth.h"
+
+void nvmet_init_auth(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
+{
+ /* Initialize in-band authentication */
+ req->sq->authenticated = false;
+ req->sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
+ req->cqe->result.u32 |= 0x2 << 16;
+}
+
+static u16 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
+{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ struct nvmf_auth_dhchap_negotiate_data *data = d;
+ int i, hash_id, null_dh = -1;
+
+ pr_debug("%s: ctrl %d qid %d: data sc_d %d napd %d authid %d halen %d dhlen %d\n",
+ __func__, ctrl->cntlid, req->sq->qid,
+ data->sc_c, data->napd, data->auth_protocol[0].dhchap.authid,
+ data->auth_protocol[0].dhchap.halen,
+ data->auth_protocol[0].dhchap.dhlen);
+ req->sq->dhchap_tid = le16_to_cpu(data->t_id);
+ if (data->sc_c)
+ return NVME_AUTH_DHCHAP_FAILURE_CONCAT_MISMATCH;
+
+ if (data->napd != 1)
+ return NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
+
+ if (data->auth_protocol[0].dhchap.authid !=
+ NVME_AUTH_DHCHAP_AUTH_ID)
+ return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+
+ hash_id = 0;
+ for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
+ if (ctrl->shash_id != data->auth_protocol[0].dhchap.idlist[i])
+ continue;
+ hash_id = ctrl->shash_id;
+ break;
+ }
+ if (hash_id == 0) {
+ pr_debug("%s: ctrl %d qid %d: no usable hash found\n",
+ __func__, ctrl->cntlid, req->sq->qid);
+ return NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
+ }
+
+ for (i = data->auth_protocol[0].dhchap.halen;
+ i < data->auth_protocol[0].dhchap.halen +
+ data->auth_protocol[0].dhchap.dhlen; i++) {
+ int dhgid = data->auth_protocol[0].dhchap.idlist[i];
+
+ if (dhgid == NVME_AUTH_DHCHAP_DHGROUP_NULL) {
+ null_dh = dhgid;
+ continue;
+ }
+ }
+ if (null_dh < 0) {
+ pr_debug("%s: ctrl %d qid %d: no DH group selected\n",
+ __func__, ctrl->cntlid, req->sq->qid);
+ return NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
+ }
+ ctrl->dh_gid = null_dh;
+ pr_debug("%s: ctrl %d qid %d: DH group %s (%d)\n",
+ __func__, ctrl->cntlid, req->sq->qid,
+ nvme_auth_dhgroup_name(ctrl->dh_gid), ctrl->dh_gid);
+ return 0;
+}
+
+static u16 nvmet_auth_reply(struct nvmet_req *req, void *d)
+{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ struct nvmf_auth_dhchap_reply_data *data = d;
+ int hash_len = crypto_shash_digestsize(ctrl->shash_tfm);
+ u8 *response;
+
+ pr_debug("%s: ctrl %d qid %d: data hl %d cvalid %d dhvlen %d\n",
+ __func__, ctrl->cntlid, req->sq->qid,
+ data->hl, data->cvalid, data->dhvlen);
+ if (data->hl != hash_len)
+ return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+
+ if (data->dhvlen) {
+ return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ }
+
+ response = kmalloc(data->hl, GFP_KERNEL);
+ if (!response)
+ return NVME_AUTH_DHCHAP_FAILURE_FAILED;
+
+ if (nvmet_auth_host_hash(req, response, data->hl) < 0) {
+ pr_debug("ctrl %d qid %d DH-HMAC-CHAP hash failed\n",
+ ctrl->cntlid, req->sq->qid);
+ kfree(response);
+ return NVME_AUTH_DHCHAP_FAILURE_FAILED;
+ }
+
+ if (memcmp(data->rval, response, data->hl)) {
+ pr_info("ctrl %d qid %d DH-HMAC-CHAP response mismatch\n",
+ ctrl->cntlid, req->sq->qid);
+ kfree(response);
+ return NVME_AUTH_DHCHAP_FAILURE_FAILED;
+ }
+ kfree(response);
+ pr_info("ctrl %d qid %d DH-HMAC-CHAP host authenticated\n",
+ ctrl->cntlid, req->sq->qid);
+ if (data->cvalid) {
+ req->sq->dhchap_c2 = kmalloc(data->hl, GFP_KERNEL);
+ if (!req->sq->dhchap_c2)
+ return NVME_AUTH_DHCHAP_FAILURE_FAILED;
+ memcpy(req->sq->dhchap_c2, data->rval + data->hl, data->hl);
+
+ pr_debug("ctrl %d qid %d challenge %*ph\n",
+ ctrl->cntlid, req->sq->qid, data->hl,
+ req->sq->dhchap_c2);
+ req->sq->dhchap_s2 = le32_to_cpu(data->seqnum);
+ } else
+ req->sq->dhchap_c2 = NULL;
+
+ return 0;
+}
+
+static u16 nvmet_auth_failure2(struct nvmet_req *req, void *d)
+{
+ struct nvmf_auth_dhchap_failure_data *data = d;
+
+ return data->rescode_exp;
+}
+
+void nvmet_execute_auth_send(struct nvmet_req *req)
+{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ struct nvmf_auth_dhchap_success2_data *data;
+ void *d;
+ u32 tl;
+ u16 status = 0;
+
+ if (req->cmd->auth_send.secp != NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ req->error_loc =
+ offsetof(struct nvmf_auth_send_command, secp);
+ goto done;
+ }
+ if (req->cmd->auth_send.spsp0 != 0x01) {
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ req->error_loc =
+ offsetof(struct nvmf_auth_send_command, spsp0);
+ goto done;
+ }
+ if (req->cmd->auth_send.spsp1 != 0x01) {
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ req->error_loc =
+ offsetof(struct nvmf_auth_send_command, spsp1);
+ goto done;
+ }
+ tl = le32_to_cpu(req->cmd->auth_send.tl);
+ if (!tl) {
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ req->error_loc =
+ offsetof(struct nvmf_auth_send_command, tl);
+ goto done;
+ }
+ if (!nvmet_check_transfer_len(req, tl)) {
+ pr_debug("%s: transfer length mismatch (%u)\n", __func__, tl);
+ return;
+ }
+
+ d = kmalloc(tl, GFP_KERNEL);
+ if (!d) {
+ status = NVME_SC_INTERNAL;
+ goto done;
+ }
+
+ status = nvmet_copy_from_sgl(req, 0, d, tl);
+ if (status) {
+ kfree(d);
+ goto done;
+ }
+
+ data = d;
+ pr_debug("%s: ctrl %d qid %d type %d id %d step %x\n", __func__,
+ ctrl->cntlid, req->sq->qid, data->auth_type, data->auth_id,
+ req->sq->dhchap_step);
+ if (data->auth_type != NVME_AUTH_COMMON_MESSAGES &&
+ data->auth_type != NVME_AUTH_DHCHAP_MESSAGES)
+ goto done_failure1;
+ if (data->auth_type == NVME_AUTH_COMMON_MESSAGES) {
+ if (data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE) {
+ /* Restart negotiation */
+ pr_debug("%s: ctrl %d qid %d reset negotiation\n", __func__,
+ ctrl->cntlid, req->sq->qid);
+ req->sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
+ } else if (data->auth_id != req->sq->dhchap_step)
+ goto done_failure1;
+ /* Validate negotiation parameters */
+ status = nvmet_auth_negotiate(req, d);
+ if (status == 0)
+ req->sq->dhchap_step =
+ NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE;
+ else {
+ req->sq->dhchap_step =
+ NVME_AUTH_DHCHAP_MESSAGE_FAILURE1;
+ req->sq->dhchap_status = status;
+ status = 0;
+ }
+ goto done_kfree;
+ }
+ if (data->auth_id != req->sq->dhchap_step) {
+ pr_debug("%s: ctrl %d qid %d step mismatch (%d != %d)\n",
+ __func__, ctrl->cntlid, req->sq->qid,
+ data->auth_id, req->sq->dhchap_step);
+ goto done_failure1;
+ }
+ if (le16_to_cpu(data->t_id) != req->sq->dhchap_tid) {
+ pr_debug("%s: ctrl %d qid %d invalid transaction %d (expected %d)\n",
+ __func__, ctrl->cntlid, req->sq->qid,
+ le16_to_cpu(data->t_id),
+ req->sq->dhchap_tid);
+ req->sq->dhchap_step =
+ NVME_AUTH_DHCHAP_MESSAGE_FAILURE1;
+ req->sq->dhchap_status =
+ NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ goto done_kfree;
+ }
+ switch (data->auth_id) {
+ case NVME_AUTH_DHCHAP_MESSAGE_REPLY:
+ status = nvmet_auth_reply(req, d);
+ if (status == 0)
+ req->sq->dhchap_step =
+ NVME_AUTH_DHCHAP_MESSAGE_SUCCESS1;
+ else {
+ req->sq->dhchap_step =
+ NVME_AUTH_DHCHAP_MESSAGE_FAILURE1;
+ req->sq->dhchap_status = status;
+ status = 0;
+ }
+ goto done_kfree;
+ break;
+ case NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2:
+ req->sq->authenticated = true;
+ pr_debug("%s: ctrl %d qid %d authenticated\n",
+ __func__, ctrl->cntlid, req->sq->qid);
+ goto done_kfree;
+ break;
+ case NVME_AUTH_DHCHAP_MESSAGE_FAILURE2:
+ status = nvmet_auth_failure2(req, d);
+ if (status) {
+ pr_warn("ctrl %d qid %d: authentication failed (%d)\n",
+ ctrl->cntlid, req->sq->qid, status);
+ req->sq->dhchap_status = status;
+ status = 0;
+ }
+ goto done_kfree;
+ break;
+ default:
+ req->sq->dhchap_status =
+ NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE;
+ req->sq->dhchap_step =
+ NVME_AUTH_DHCHAP_MESSAGE_FAILURE2;
+ goto done_kfree;
+ break;
+ }
+done_failure1:
+ req->sq->dhchap_status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE;
+ req->sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_FAILURE2;
+
+done_kfree:
+ kfree(d);
+done:
+ pr_debug("%s: ctrl %d qid %d dhchap status %x step %x\n", __func__,
+ ctrl->cntlid, req->sq->qid,
+ req->sq->dhchap_status, req->sq->dhchap_step);
+ if (status)
+ pr_debug("%s: ctrl %d qid %d nvme status %x error loc %d\n",
+ __func__, ctrl->cntlid, req->sq->qid,
+ status, req->error_loc);
+ req->cqe->result.u64 = 0;
+ nvmet_req_complete(req, status);
+ if (req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2 &&
+ req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_FAILURE2)
+ return;
+ /* Final states, clear up variables */
+ nvmet_auth_sq_free(req->sq);
+ if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE2)
+ nvmet_ctrl_fatal_error(ctrl);
+}
+
+static int nvmet_auth_challenge(struct nvmet_req *req, void *d, int al)
+{
+ struct nvmf_auth_dhchap_challenge_data *data = d;
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ int ret = 0;
+ int hash_len = crypto_shash_digestsize(ctrl->shash_tfm);
+ int data_size = sizeof(*data) + hash_len;
+
+ if (al < data_size) {
+ pr_debug("%s: buffer too small (al %d need %d)\n", __func__,
+ al, data_size);
+ return -EINVAL;
+ }
+ memset(data, 0, data_size);
+ req->sq->dhchap_s1 = ctrl->dhchap_seqnum++;
+ data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
+ data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE;
+ data->t_id = cpu_to_le16(req->sq->dhchap_tid);
+ data->hashid = ctrl->shash_id;
+ data->hl = hash_len;
+ data->seqnum = cpu_to_le32(req->sq->dhchap_s1);
+ req->sq->dhchap_c1 = kmalloc(data->hl, GFP_KERNEL);
+ if (!req->sq->dhchap_c1)
+ return -ENOMEM;
+ get_random_bytes(req->sq->dhchap_c1, data->hl);
+ memcpy(data->cval, req->sq->dhchap_c1, data->hl);
+ pr_debug("%s: ctrl %d qid %d seq %d transaction %d hl %d dhvlen %d\n",
+ __func__, ctrl->cntlid, req->sq->qid, req->sq->dhchap_s1,
+ req->sq->dhchap_tid, data->hl, data->dhvlen);
+ return ret;
+}
+
+static int nvmet_auth_success1(struct nvmet_req *req, void *d, int al)
+{
+ struct nvmf_auth_dhchap_success1_data *data = d;
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ int hash_len = crypto_shash_digestsize(ctrl->shash_tfm);
+
+ WARN_ON(al < sizeof(*data));
+ memset(data, 0, sizeof(*data));
+ data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
+ data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_SUCCESS1;
+ data->t_id = cpu_to_le16(req->sq->dhchap_tid);
+ data->hl = hash_len;
+ if (req->sq->dhchap_c2) {
+ if (nvmet_auth_ctrl_hash(req, data->rval, data->hl))
+ return NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
+ data->rvalid = 1;
+ pr_debug("ctrl %d qid %d response %*ph\n",
+ ctrl->cntlid, req->sq->qid, data->hl, data->rval);
+ }
+ return 0;
+}
+
+static void nvmet_auth_failure1(struct nvmet_req *req, void *d, int al)
+{
+ struct nvmf_auth_dhchap_failure_data *data = d;
+
+ WARN_ON(al < sizeof(*data));
+ data->auth_type = NVME_AUTH_COMMON_MESSAGES;
+ data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_FAILURE1;
+ data->t_id = cpu_to_le16(req->sq->dhchap_tid);
+ data->rescode = NVME_AUTH_DHCHAP_FAILURE_REASON_FAILED;
+ data->rescode_exp = req->sq->dhchap_status;
+}
+
+void nvmet_execute_auth_receive(struct nvmet_req *req)
+{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ void *d;
+ u32 al;
+ u16 status = 0;
+
+ if (req->cmd->auth_receive.secp != NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ req->error_loc =
+ offsetof(struct nvmf_auth_receive_command, secp);
+ goto done;
+ }
+ if (req->cmd->auth_receive.spsp0 != 0x01) {
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ req->error_loc =
+ offsetof(struct nvmf_auth_receive_command, spsp0);
+ goto done;
+ }
+ if (req->cmd->auth_receive.spsp1 != 0x01) {
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ req->error_loc =
+ offsetof(struct nvmf_auth_receive_command, spsp1);
+ goto done;
+ }
+ al = le32_to_cpu(req->cmd->auth_receive.al);
+ if (!al) {
+ status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
+ req->error_loc =
+ offsetof(struct nvmf_auth_receive_command, al);
+ goto done;
+ }
+ if (!nvmet_check_transfer_len(req, al)) {
+ pr_debug("%s: transfer length mismatch (%u)\n", __func__, al);
+ return;
+ }
+
+ d = kmalloc(al, GFP_KERNEL);
+ if (!d) {
+ status = NVME_SC_INTERNAL;
+ goto done;
+ }
+ pr_debug("%s: ctrl %d qid %d step %x\n", __func__,
+ ctrl->cntlid, req->sq->qid, req->sq->dhchap_step);
+ switch (req->sq->dhchap_step) {
+ case NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE:
+ status = nvmet_auth_challenge(req, d, al);
+ if (status < 0) {
+ pr_warn("ctrl %d qid %d: challenge error (%d)\n",
+ ctrl->cntlid, req->sq->qid, status);
+ status = NVME_SC_INTERNAL;
+ break;
+ }
+ if (status) {
+ req->sq->dhchap_status = status;
+ nvmet_auth_failure1(req, d, al);
+ pr_warn("ctrl %d qid %d: challenge status (%x)\n",
+ ctrl->cntlid, req->sq->qid,
+ req->sq->dhchap_status);
+ status = 0;
+ break;
+ }
+ req->sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_REPLY;
+ break;
+ case NVME_AUTH_DHCHAP_MESSAGE_SUCCESS1:
+ status = nvmet_auth_success1(req, d, al);
+ if (status) {
+ req->sq->dhchap_status = status;
+ nvmet_auth_failure1(req, d, al);
+ pr_warn("ctrl %d qid %d: success1 status (%x)\n",
+ ctrl->cntlid, req->sq->qid,
+ req->sq->dhchap_status);
+ break;
+ }
+ req->sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2;
+ break;
+ case NVME_AUTH_DHCHAP_MESSAGE_FAILURE1:
+ nvmet_auth_failure1(req, d, al);
+ pr_warn("ctrl %d qid %d failure1 (%x)\n",
+ ctrl->cntlid, req->sq->qid, req->sq->dhchap_status);
+ break;
+ default:
+ pr_warn("ctrl %d qid %d unhandled step (%d)\n",
+ ctrl->cntlid, req->sq->qid, req->sq->dhchap_step);
+ req->sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_FAILURE1;
+ req->sq->dhchap_status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
+ nvmet_auth_failure1(req, d, al);
+ status = 0;
+ break;
+ }
+
+ status = nvmet_copy_to_sgl(req, 0, d, al);
+ kfree(d);
+done:
+ req->cqe->result.u64 = 0;
+ nvmet_req_complete(req, status);
+ if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) {
+ nvmet_auth_sq_free(req->sq);
+ nvmet_ctrl_fatal_error(ctrl);
+ }
+}
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index 7d0454cee920..d5a4a9a68ee1 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -93,6 +93,14 @@ u16 nvmet_parse_fabrics_cmd(struct nvmet_req *req)
case nvme_fabrics_type_property_get:
req->execute = nvmet_execute_prop_get;
break;
+#ifdef CONFIG_NVME_TARGET_AUTH
+ case nvme_fabrics_type_auth_send:
+ req->execute = nvmet_execute_auth_send;
+ break;
+ case nvme_fabrics_type_auth_receive:
+ req->execute = nvmet_execute_auth_receive;
+ break;
+#endif
default:
pr_debug("received unknown capsule type 0x%x\n",
cmd->fabrics.fctype);
@@ -173,6 +181,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
struct nvmf_connect_data *d;
struct nvmet_ctrl *ctrl = NULL;
u16 status = 0;
+ int ret;

if (!nvmet_check_transfer_len(req, sizeof(struct nvmf_connect_data)))
return;
@@ -215,17 +224,31 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)

uuid_copy(&ctrl->hostid, &d->hostid);

+ ret = nvmet_setup_auth(ctrl);
+ if (ret < 0) {
+ pr_err("Failed to setup authentication, error %d\n", ret);
+ nvmet_ctrl_put(ctrl);
+ if (ret == -EPERM)
+ status = (NVME_SC_CONNECT_INVALID_HOST | NVME_SC_DNR);
+ else
+ status = NVME_SC_INTERNAL;
+ goto out;
+ }
+
status = nvmet_install_queue(ctrl, req);
if (status) {
nvmet_ctrl_put(ctrl);
goto out;
}

- pr_info("creating controller %d for subsystem %s for NQN %s%s.\n",
+ pr_info("creating controller %d for subsystem %s for NQN %s%s%s.\n",
ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn,
- ctrl->pi_support ? " T10-PI is enabled" : "");
+ ctrl->pi_support ? " T10-PI is enabled" : "",
+ nvmet_has_auth(ctrl) ? " with DH-HMAC-CHAP" : "");
req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);

+ if (nvmet_has_auth(ctrl))
+ nvmet_init_auth(ctrl, req);
out:
kfree(d);
complete:
@@ -285,6 +308,8 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);

pr_debug("adding queue %d to ctrl %d.\n", qid, ctrl->cntlid);
+ if (nvmet_has_auth(ctrl))
+ nvmet_init_auth(ctrl, req);

out:
kfree(d);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 7143c7fa7464..ab25f9e18027 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -108,6 +108,18 @@ struct nvmet_sq {
u16 size;
u32 sqhd;
bool sqhd_disabled;
+#ifdef CONFIG_NVME_TARGET_AUTH
+ bool authenticated;
+ u16 dhchap_tid;
+ u16 dhchap_status;
+ int dhchap_step;
+ u8 *dhchap_c1;
+ u8 *dhchap_c2;
+ u32 dhchap_s1;
+ u32 dhchap_s2;
+ u8 *dhchap_skey;
+ int dhchap_skey_len;
+#endif
struct completion free_done;
struct completion confirm_done;
};
@@ -209,6 +221,15 @@ struct nvmet_ctrl {
u64 err_counter;
struct nvme_error_slot slots[NVMET_ERROR_LOG_SLOTS];
bool pi_support;
+#ifdef CONFIG_NVME_TARGET_AUTH
+ u32 dhchap_seqnum;
+ u8 *dhchap_key;
+ size_t dhchap_key_len;
+ struct crypto_shash *shash_tfm;
+ u8 shash_id;
+ u32 dh_gid;
+ u32 dh_keysize;
+#endif
};

struct nvmet_subsys {
@@ -270,6 +291,10 @@ static inline struct nvmet_subsys *namespaces_to_subsys(

struct nvmet_host {
struct config_group group;
+ u8 *dhchap_secret;
+ u8 dhchap_key_hash;
+ u8 dhchap_hash_id;
+ u8 dhchap_dhgroup_id;
};

static inline struct nvmet_host *to_host(struct config_item *item)
@@ -660,4 +685,42 @@ static inline void nvmet_req_bio_put(struct nvmet_req *req, struct bio *bio)
bio_put(bio);
}

+#ifdef CONFIG_NVME_TARGET_AUTH
+void nvmet_execute_auth_send(struct nvmet_req *req);
+void nvmet_execute_auth_receive(struct nvmet_req *req);
+int nvmet_auth_set_host_key(struct nvmet_host *host, const char *secret);
+int nvmet_auth_set_host_hash(struct nvmet_host *host, const char *hash);
+int nvmet_setup_auth(struct nvmet_ctrl *ctrl);
+void nvmet_init_auth(struct nvmet_ctrl *ctrl, struct nvmet_req *req);
+void nvmet_destroy_auth(struct nvmet_ctrl *ctrl);
+void nvmet_auth_sq_free(struct nvmet_sq *sq);
+bool nvmet_check_auth_status(struct nvmet_req *req);
+int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response,
+ unsigned int hash_len);
+int nvmet_auth_ctrl_hash(struct nvmet_req *req, u8 *response,
+ unsigned int hash_len);
+static inline bool nvmet_has_auth(struct nvmet_ctrl *ctrl)
+{
+ return ctrl->shash_tfm != NULL;
+}
+#else
+static inline int nvmet_setup_auth(struct nvmet_ctrl *ctrl)
+{
+ return 0;
+}
+static inline void nvmet_init_auth(struct nvmet_ctrl *ctrl,
+ struct nvmet_req *req) {};
+static inline void nvmet_destroy_auth(struct nvmet_ctrl *ctrl) {};
+static inline void nvmet_auth_sq_free(struct nvmet_sq *sq) {};
+static inline bool nvmet_check_auth_status(struct nvmet_req *req)
+{
+ return true;
+}
+static inline bool nvmet_has_auth(struct nvmet_ctrl *ctrl)
+{
+ return false;
+}
+static inline const char *nvmet_dhchap_dhgroup_name(int dhgid) { return NULL; }
+#endif
+
#endif /* _NVMET_H */
--
2.29.2

2021-09-10 06:44:58

by Hannes Reinecke

Subject: [PATCH 11/12] nvmet-auth: Diffie-Hellman key exchange support

Implement Diffie-Hellman key exchange using FFDHE groups for NVMe
In-Band Authentication.
This patch adds a new host configfs attribute 'dhchap_dhgroup' to
select the FFDHE group to use.

Signed-off-by: Hannes Reinecke <[email protected]>
---
drivers/nvme/target/Kconfig | 1 +
drivers/nvme/target/auth.c | 148 ++++++++++++++++++++++++-
drivers/nvme/target/configfs.c | 31 ++++++
drivers/nvme/target/fabrics-cmd-auth.c | 30 ++++-
drivers/nvme/target/nvmet.h | 6 +
5 files changed, 209 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig
index 70f3c385fc9f..2e41d70fd881 100644
--- a/drivers/nvme/target/Kconfig
+++ b/drivers/nvme/target/Kconfig
@@ -90,6 +90,7 @@ config NVME_TARGET_AUTH
select CRYPTO_HMAC
select CRYPTO_SHA256
select CRYPTO_SHA512
+ select CRYPTO_FFDHE
help
This enables support for NVMe over Fabrics In-band Authentication

diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c
index 5b5f3cd4f914..fe44593a37f8 100644
--- a/drivers/nvme/target/auth.c
+++ b/drivers/nvme/target/auth.c
@@ -53,6 +53,71 @@ int nvmet_auth_set_host_key(struct nvmet_host *host, const char *secret)
return 0;
}

+int nvmet_setup_dhgroup(struct nvmet_ctrl *ctrl, int dhgroup_id)
+{
+ struct nvmet_host_link *p;
+ struct nvmet_host *host = NULL;
+ const char *dhgroup_kpp;
+ int ret = -ENOTSUPP;
+
+ if (dhgroup_id == NVME_AUTH_DHCHAP_DHGROUP_NULL)
+ return 0;
+
+ down_read(&nvmet_config_sem);
+ if (ctrl->subsys->type == NVME_NQN_DISC)
+ goto out_unlock;
+
+ list_for_each_entry(p, &ctrl->subsys->hosts, entry) {
+ if (strcmp(nvmet_host_name(p->host), ctrl->hostnqn))
+ continue;
+ host = p->host;
+ break;
+ }
+ if (!host) {
+ pr_debug("host %s not found\n", ctrl->hostnqn);
+ ret = -ENXIO;
+ goto out_unlock;
+ }
+
+ if (host->dhchap_dhgroup_id != dhgroup_id) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+ if (ctrl->dh_tfm) {
+ if (ctrl->dh_gid == dhgroup_id) {
+ pr_debug("reuse existing DH group %d\n", dhgroup_id);
+ ret = 0;
+ } else {
+ pr_debug("DH group mismatch (selected %d, requested %d)\n",
+ ctrl->dh_gid, dhgroup_id);
+ ret = -EINVAL;
+ }
+ goto out_unlock;
+ }
+
+ dhgroup_kpp = nvme_auth_dhgroup_kpp(dhgroup_id);
+ if (!dhgroup_kpp) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+ ctrl->dh_tfm = crypto_alloc_kpp(dhgroup_kpp, 0, 0);
+ if (IS_ERR(ctrl->dh_tfm)) {
+ pr_debug("failed to setup DH group %d, err %ld\n",
+ dhgroup_id, PTR_ERR(ctrl->dh_tfm));
+ ret = PTR_ERR(ctrl->dh_tfm);
+ ctrl->dh_tfm = NULL;
+ } else {
+ ctrl->dh_gid = dhgroup_id;
+ ctrl->dh_keysize = nvme_auth_dhgroup_pubkey_size(dhgroup_id);
+ ret = 0;
+ }
+
+out_unlock:
+ up_read(&nvmet_config_sem);
+
+ return ret;
+}
+
int nvmet_setup_auth(struct nvmet_ctrl *ctrl)
{
int ret = 0;
@@ -147,6 +212,11 @@ void nvmet_destroy_auth(struct nvmet_ctrl *ctrl)
ctrl->shash_tfm = NULL;
ctrl->shash_id = 0;
}
+ if (ctrl->dh_tfm) {
+ crypto_free_kpp(ctrl->dh_tfm);
+ ctrl->dh_tfm = NULL;
+ ctrl->dh_gid = 0;
+ }
if (ctrl->dhchap_key) {
kfree(ctrl->dhchap_key);
ctrl->dhchap_key = NULL;
@@ -182,8 +252,18 @@ int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response,
return ret;
}
if (ctrl->dh_gid != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
- ret = -ENOTSUPP;
- goto out;
+ challenge = kmalloc(shash_len, GFP_KERNEL);
+ if (!challenge) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ ret = nvme_auth_augmented_challenge(ctrl->shash_id,
+ req->sq->dhchap_skey,
+ req->sq->dhchap_skey_len,
+ req->sq->dhchap_c1,
+ challenge, shash_len);
+ if (ret)
+ goto out;
}

shash->tfm = ctrl->shash_tfm;
@@ -256,8 +336,18 @@ int nvmet_auth_ctrl_hash(struct nvmet_req *req, u8 *response,
return ret;
}
if (ctrl->dh_gid != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
- ret = -ENOTSUPP;
- goto out;
+ challenge = kmalloc(shash_len, GFP_KERNEL);
+ if (!challenge) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ ret = nvme_auth_augmented_challenge(ctrl->shash_id,
+ req->sq->dhchap_skey,
+ req->sq->dhchap_skey_len,
+ req->sq->dhchap_c2,
+ challenge, shash_len);
+ if (ret)
+ goto out;
}

shash->tfm = ctrl->shash_tfm;
@@ -299,3 +389,53 @@ int nvmet_auth_ctrl_hash(struct nvmet_req *req, u8 *response,
kfree_sensitive(ctrl_response);
return 0;
}
+
+int nvmet_auth_ctrl_exponential(struct nvmet_req *req,
+ u8 *buf, int buf_size)
+{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ int ret;
+
+ if (!ctrl->dh_tfm) {
+ pr_warn("No DH algorithm!\n");
+ return -ENOKEY;
+ }
+ ret = nvme_auth_gen_pubkey(ctrl->dh_tfm, buf, buf_size);
+ if (ret == -EOVERFLOW) {
+ pr_debug("public key buffer too small, need %d is %d\n",
+ crypto_kpp_maxsize(ctrl->dh_tfm), buf_size);
+ ret = -ENOKEY;
+ } else if (ret) {
+ pr_debug("failed to generate public key, err %d\n", ret);
+ ret = -ENOKEY;
+ } else
+ pr_debug("%s: ctrl public key %*ph\n", __func__,
+ (int)buf_size, buf);
+
+ return ret;
+}
+
+int nvmet_auth_ctrl_sesskey(struct nvmet_req *req,
+ u8 *pkey, int pkey_size)
+{
+ struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ int ret;
+
+ req->sq->dhchap_skey_len =
+ nvme_auth_dhgroup_privkey_size(ctrl->dh_gid);
+ req->sq->dhchap_skey = kzalloc(req->sq->dhchap_skey_len, GFP_KERNEL);
+ if (!req->sq->dhchap_skey)
+ return -ENOMEM;
+ ret = nvme_auth_gen_shared_secret(ctrl->dh_tfm,
+ pkey, pkey_size,
+ req->sq->dhchap_skey,
+ req->sq->dhchap_skey_len);
+ if (ret)
+ pr_debug("failed to compute shared secret, err %d\n", ret);
+ else
+ pr_debug("%s: shared secret %*ph\n", __func__,
+ (int)req->sq->dhchap_skey_len,
+ req->sq->dhchap_skey);
+
+ return ret;
+}
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 7c13810a637f..4aa554982995 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -1713,9 +1713,40 @@ static ssize_t nvmet_host_dhchap_hash_store(struct config_item *item,

CONFIGFS_ATTR(nvmet_host_, dhchap_hash);

+static ssize_t nvmet_host_dhchap_dhgroup_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_host *host = to_host(item);
+ const char *dhgroup = nvme_auth_dhgroup_name(host->dhchap_dhgroup_id);
+
+ return sprintf(page, "%s\n", dhgroup ? dhgroup : "none");
+}
+
+static ssize_t nvmet_host_dhchap_dhgroup_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_host *host = to_host(item);
+ int dhgroup_id;
+
+ dhgroup_id = nvme_auth_dhgroup_id(page);
+ if (dhgroup_id < 0)
+ return -EINVAL;
+ if (dhgroup_id != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
+ const char *kpp = nvme_auth_dhgroup_kpp(dhgroup_id);
+
+ if (!crypto_has_kpp(kpp, 0, 0))
+ return -EINVAL;
+ }
+ host->dhchap_dhgroup_id = dhgroup_id;
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_host_, dhchap_dhgroup);
+
static struct configfs_attribute *nvmet_host_attrs[] = {
&nvmet_host_attr_dhchap_key,
&nvmet_host_attr_dhchap_hash,
+ &nvmet_host_attr_dhchap_dhgroup,
NULL,
};
#endif /* CONFIG_NVME_TARGET_AUTH */
diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
index ab9dfc06bac0..2f1b95098917 100644
--- a/drivers/nvme/target/fabrics-cmd-auth.c
+++ b/drivers/nvme/target/fabrics-cmd-auth.c
@@ -64,13 +64,24 @@ static u16 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
null_dh = dhgid;
continue;
}
+ if (ctrl->dh_tfm && ctrl->dh_gid == dhgid) {
+ pr_debug("%s: ctrl %d qid %d: reusing existing DH group %d\n",
+ __func__, ctrl->cntlid, req->sq->qid, dhgid);
+ break;
+ }
+ if (nvmet_setup_dhgroup(ctrl, dhgid) < 0)
+ continue;
+ if (nvme_auth_gen_privkey(ctrl->dh_tfm, dhgid) == 0)
+ break;
+ crypto_free_kpp(ctrl->dh_tfm);
+ ctrl->dh_tfm = NULL;
+ ctrl->dh_gid = 0;
}
- if (null_dh < 0) {
+ if (!ctrl->dh_tfm && null_dh < 0) {
pr_debug("%s: ctrl %d qid %d: no DH group selected\n",
__func__, ctrl->cntlid, req->sq->qid);
return NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
}
- ctrl->dh_gid = null_dh;
pr_debug("%s: ctrl %d qid %d: DH group %s (%d)\n",
__func__, ctrl->cntlid, req->sq->qid,
nvme_auth_dhgroup_name(ctrl->dh_gid), ctrl->dh_gid);
@@ -91,7 +102,11 @@ static u16 nvmet_auth_reply(struct nvmet_req *req, void *d)
return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;

if (data->dhvlen) {
- return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ if (!ctrl->dh_tfm)
+ return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ if (nvmet_auth_ctrl_sesskey(req, data->rval + 2 * data->hl,
+ data->dhvlen) < 0)
+ return NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
}

response = kmalloc(data->hl, GFP_KERNEL);
@@ -232,6 +247,7 @@ void nvmet_execute_auth_send(struct nvmet_req *req)
NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
goto done_kfree;
}
+
switch (data->auth_id) {
case NVME_AUTH_DHCHAP_MESSAGE_REPLY:
status = nvmet_auth_reply(req, d);
@@ -303,6 +319,8 @@ static int nvmet_auth_challenge(struct nvmet_req *req, void *d, int al)
int hash_len = crypto_shash_digestsize(ctrl->shash_tfm);
int data_size = sizeof(*d) + hash_len;

+ if (ctrl->dh_tfm)
+ data_size += ctrl->dh_keysize;
if (al < data_size) {
pr_debug("%s: buffer too small (al %d need %d)\n", __func__,
al, data_size);
@@ -321,6 +339,12 @@ static int nvmet_auth_challenge(struct nvmet_req *req, void *d, int al)
return -ENOMEM;
get_random_bytes(req->sq->dhchap_c1, data->hl);
memcpy(data->cval, req->sq->dhchap_c1, data->hl);
+ if (ctrl->dh_tfm) {
+ data->dhgid = ctrl->dh_gid;
+ data->dhvlen = ctrl->dh_keysize;
+ ret = nvmet_auth_ctrl_exponential(req, data->cval + data->hl,
+ data->dhvlen);
+ }
pr_debug("%s: ctrl %d qid %d seq %d transaction %d hl %d dhvlen %d\n",
__func__, ctrl->cntlid, req->sq->qid, req->sq->dhchap_s1,
req->sq->dhchap_tid, data->hl, data->dhvlen);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index ab25f9e18027..d0849404f398 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -227,6 +227,7 @@ struct nvmet_ctrl {
size_t dhchap_key_len;
struct crypto_shash *shash_tfm;
u8 shash_id;
+ struct crypto_kpp *dh_tfm;
u32 dh_gid;
u32 dh_keysize;
#endif
@@ -694,6 +695,7 @@ int nvmet_setup_auth(struct nvmet_ctrl *ctrl);
void nvmet_init_auth(struct nvmet_ctrl *ctrl, struct nvmet_req *req);
void nvmet_destroy_auth(struct nvmet_ctrl *ctrl);
void nvmet_auth_sq_free(struct nvmet_sq *sq);
+int nvmet_setup_dhgroup(struct nvmet_ctrl *ctrl, int dhgroup_id);
bool nvmet_check_auth_status(struct nvmet_req *req);
int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response,
unsigned int hash_len);
@@ -703,6 +705,10 @@ static inline bool nvmet_has_auth(struct nvmet_ctrl *ctrl)
{
return ctrl->shash_tfm != NULL;
}
+int nvmet_auth_ctrl_exponential(struct nvmet_req *req,
+ u8 *buf, int buf_size);
+int nvmet_auth_ctrl_sesskey(struct nvmet_req *req,
+ u8 *buf, int buf_size);
#else
static inline int nvmet_setup_auth(struct nvmet_ctrl *ctrl)
{
--
2.29.2
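The exchange the target-side helpers in this patch drive (nvme_auth_gen_privkey,
then nvme_auth_gen_pubkey, then nvme_auth_gen_shared_secret) is plain
finite-field Diffie-Hellman. As a rough illustration of the math only, using a
toy 64-bit prime rather than the RFC 7919 FFDHE groups the patch actually
selects, and without the key-size or RNG requirements the spec imposes:

```python
# Toy finite-field Diffie-Hellman mirroring the flow in the patch:
# gen_privkey -> gen_pubkey -> shared secret. The 64-bit prime below is for
# illustration only; the patch uses the RFC 7919 FFDHE groups (2048..8192 bit)
# via the kernel's "dh" KPP implementation.
import secrets

p = 0xFFFFFFFFFFFFFFC5  # largest 64-bit prime; NOT an FFDHE group
g = 2

def gen_privkey() -> int:
    # random private exponent 'x' (the patch draws 512 random bits for this)
    return secrets.randbelow(p - 3) + 2

def gen_pubkey(priv: int) -> int:
    # DH value g^x mod p, carried in the challenge/reply 'dhv' field
    return pow(g, priv, p)

def shared_secret(peer_pub: int, priv: int) -> int:
    # both sides arrive at g^(xy) mod p, the session key material
    return pow(peer_pub, priv, p)

host_priv, ctrl_priv = gen_privkey(), gen_privkey()
host_pub, ctrl_pub = gen_pubkey(host_priv), gen_pubkey(ctrl_priv)
assert shared_secret(ctrl_pub, host_priv) == shared_secret(host_pub, ctrl_priv)
```

In the patch the resulting shared secret is not used directly: it is first
hashed (nvme_auth_augmented_challenge) to augment the challenge when a DH group
other than NULL was negotiated.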

2021-09-10 06:44:59

by Hannes Reinecke

Subject: [PATCH 07/12] nvme: Implement In-Band authentication

Implement NVMe-oF In-Band authentication according to NVMe TPAR 8006.
This patch adds two new fabric options 'dhchap_secret' to specify the
pre-shared key (in ASCII representation according to NVMe 2.0 section
8.13.5.8 'Secret representation') and 'dhchap_bidi' to request bi-directional
authentication of both the host and the controller.
Re-authentication can be triggered by writing the PSK into the new
controller sysfs attribute 'dhchap_secret'.
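
The 'Secret representation' checks that nvme_auth_extract_secret() applies can
be sketched in Python: base64-decode the secret, accept only decoded lengths of
36/52/68 bytes (a 32/48/64-byte key plus a 4-byte CRC), and verify the trailing
little-endian CRC32 over the key bytes. This sketch assumes the "DHHC-1:xx:"
prefix of the full secret format has already been stripped, and relies on
zlib.crc32 doing the pre-/post-inversion the kernel code adds by hand:

```python
# Sketch of the DH-HMAC-CHAP secret validation in nvme_auth_extract_secret().
import base64
import struct
import zlib

def extract_secret(secret_b64: str) -> bytes:
    blob = base64.b64decode(secret_b64)
    if len(blob) not in (36, 52, 68):
        raise ValueError("invalid DH-HMAC-CHAP key length %d" % len(blob))
    key, crc_raw = blob[:-4], blob[-4:]
    (expected,) = struct.unpack("<I", crc_raw)  # CRC stored little-endian
    # zlib.crc32 already applies the pre-/post-inversion the kernel adds manually
    if zlib.crc32(key) != expected:
        raise ValueError("DH-HMAC-CHAP key CRC mismatch")
    return key

# Round-trip a syntactically valid 32-byte example key
demo_key = bytes(range(32))
demo_secret = base64.b64encode(
    demo_key + struct.pack("<I", zlib.crc32(demo_key))).decode()
assert extract_secret(demo_secret) == demo_key
```

A tampered secret (any flipped bit in key or CRC) fails the final check, which
is what the kernel reports as -EKEYREJECTED.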

Signed-off-by: Hannes Reinecke <[email protected]>
---
drivers/nvme/host/Kconfig | 12 +
drivers/nvme/host/Makefile | 1 +
drivers/nvme/host/auth.c | 1285 +++++++++++++++++++++++++++++++++++
drivers/nvme/host/auth.h | 25 +
drivers/nvme/host/core.c | 79 ++-
drivers/nvme/host/fabrics.c | 73 +-
drivers/nvme/host/fabrics.h | 6 +
drivers/nvme/host/nvme.h | 30 +
drivers/nvme/host/trace.c | 32 +
9 files changed, 1537 insertions(+), 6 deletions(-)
create mode 100644 drivers/nvme/host/auth.c
create mode 100644 drivers/nvme/host/auth.h

diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
index dc0450ca23a3..97e8412dc42d 100644
--- a/drivers/nvme/host/Kconfig
+++ b/drivers/nvme/host/Kconfig
@@ -83,3 +83,15 @@ config NVME_TCP
from https://github.com/linux-nvme/nvme-cli.

If unsure, say N.
+
+config NVME_AUTH
+ bool "NVM Express over Fabrics In-Band Authentication"
+ depends on NVME_CORE
+ select CRYPTO_HMAC
+ select CRYPTO_SHA256
+ select CRYPTO_SHA512
+ help
+ This provides support for NVMe over Fabrics In-Band Authentication
+ for the NVMe over TCP transport.
+
+ If unsure, say N.
diff --git a/drivers/nvme/host/Makefile b/drivers/nvme/host/Makefile
index dfaacd472e5d..4bae2a4a8d8c 100644
--- a/drivers/nvme/host/Makefile
+++ b/drivers/nvme/host/Makefile
@@ -15,6 +15,7 @@ nvme-core-$(CONFIG_NVME_MULTIPATH) += multipath.o
nvme-core-$(CONFIG_BLK_DEV_ZONED) += zns.o
nvme-core-$(CONFIG_FAULT_INJECTION_DEBUG_FS) += fault_inject.o
nvme-core-$(CONFIG_NVME_HWMON) += hwmon.o
+nvme-core-$(CONFIG_NVME_AUTH) += auth.o

nvme-y += pci.o

diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
new file mode 100644
index 000000000000..5393ac16a002
--- /dev/null
+++ b/drivers/nvme/host/auth.c
@@ -0,0 +1,1285 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020 Hannes Reinecke, SUSE Linux
+ */
+
+#include <linux/crc32.h>
+#include <linux/base64.h>
+#include <asm/unaligned.h>
+#include <crypto/hash.h>
+#include <crypto/dh.h>
+#include <crypto/ffdhe.h>
+#include "nvme.h"
+#include "fabrics.h"
+#include "auth.h"
+
+static u32 nvme_dhchap_seqnum;
+
+struct nvme_dhchap_queue_context {
+ struct list_head entry;
+ struct work_struct auth_work;
+ struct nvme_ctrl *ctrl;
+ struct crypto_shash *shash_tfm;
+ struct crypto_kpp *dh_tfm;
+ void *buf;
+ size_t buf_size;
+ int qid;
+ int error;
+ u32 s1;
+ u32 s2;
+ u16 transaction;
+ u8 status;
+ u8 hash_id;
+ u8 hash_len;
+ u8 dhgroup_id;
+ u8 c1[64];
+ u8 c2[64];
+ u8 response[64];
+ u8 *host_response;
+};
+
+static struct nvme_auth_dhgroup_map {
+ int id;
+ const char name[16];
+ const char kpp[16];
+ int privkey_size;
+ int pubkey_size;
+} dhgroup_map[] = {
+ { .id = NVME_AUTH_DHCHAP_DHGROUP_NULL,
+ .name = "NULL", .kpp = "NULL",
+ .privkey_size = 0, .pubkey_size = 0 },
+ { .id = NVME_AUTH_DHCHAP_DHGROUP_2048,
+ .name = "ffdhe2048", .kpp = "dh",
+ .privkey_size = 256, .pubkey_size = 256 },
+ { .id = NVME_AUTH_DHCHAP_DHGROUP_3072,
+ .name = "ffdhe3072", .kpp = "dh",
+ .privkey_size = 384, .pubkey_size = 384 },
+ { .id = NVME_AUTH_DHCHAP_DHGROUP_4096,
+ .name = "ffdhe4096", .kpp = "dh",
+ .privkey_size = 512, .pubkey_size = 512 },
+ { .id = NVME_AUTH_DHCHAP_DHGROUP_6144,
+ .name = "ffdhe6144", .kpp = "dh",
+ .privkey_size = 768, .pubkey_size = 768 },
+ { .id = NVME_AUTH_DHCHAP_DHGROUP_8192,
+ .name = "ffdhe8192", .kpp = "dh",
+ .privkey_size = 1024, .pubkey_size = 1024 },
+};
+
+const char *nvme_auth_dhgroup_name(int dhgroup_id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
+ if (dhgroup_map[i].id == dhgroup_id)
+ return dhgroup_map[i].name;
+ }
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_name);
+
+int nvme_auth_dhgroup_pubkey_size(int dhgroup_id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
+ if (dhgroup_map[i].id == dhgroup_id)
+ return dhgroup_map[i].pubkey_size;
+ }
+ return -1;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_pubkey_size);
+
+int nvme_auth_dhgroup_privkey_size(int dhgroup_id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
+ if (dhgroup_map[i].id == dhgroup_id)
+ return dhgroup_map[i].privkey_size;
+ }
+ return -1;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_privkey_size);
+
+const char *nvme_auth_dhgroup_kpp(int dhgroup_id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
+ if (dhgroup_map[i].id == dhgroup_id)
+ return dhgroup_map[i].kpp;
+ }
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_kpp);
+
+int nvme_auth_dhgroup_id(const char *dhgroup_name)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
+ if (!strncmp(dhgroup_map[i].name, dhgroup_name,
+ strlen(dhgroup_map[i].name)))
+ return dhgroup_map[i].id;
+ }
+ return -1;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_id);
+
+static struct nvme_dhchap_hash_map {
+ int id;
+ const char hmac[15];
+ const char digest[15];
+} hash_map[] = {
+ {.id = NVME_AUTH_DHCHAP_SHA256,
+ .hmac = "hmac(sha256)", .digest = "sha256" },
+ {.id = NVME_AUTH_DHCHAP_SHA384,
+ .hmac = "hmac(sha384)", .digest = "sha384" },
+ {.id = NVME_AUTH_DHCHAP_SHA512,
+ .hmac = "hmac(sha512)", .digest = "sha512" },
+};
+
+const char *nvme_auth_hmac_name(int hmac_id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(hash_map); i++) {
+ if (hash_map[i].id == hmac_id)
+ return hash_map[i].hmac;
+ }
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_hmac_name);
+
+const char *nvme_auth_digest_name(int hmac_id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(hash_map); i++) {
+ if (hash_map[i].id == hmac_id)
+ return hash_map[i].digest;
+ }
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_digest_name);
+
+int nvme_auth_hmac_id(const char *hmac_name)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(hash_map); i++) {
+ if (!strncmp(hash_map[i].hmac, hmac_name,
+ strlen(hash_map[i].hmac)))
+ return hash_map[i].id;
+ }
+ return -1;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_hmac_id);
+
+unsigned char *nvme_auth_extract_secret(unsigned char *secret, size_t *out_len)
+{
+ unsigned char *key;
+ u32 crc;
+ int key_len;
+ size_t allocated_len;
+
+ allocated_len = strlen(secret);
+ key = kzalloc(allocated_len, GFP_KERNEL);
+ if (!key)
+ return ERR_PTR(-ENOMEM);
+
+ key_len = base64_decode(secret, allocated_len, key);
+ if (key_len != 36 && key_len != 52 &&
+ key_len != 68) {
+ pr_debug("Invalid DH-HMAC-CHAP key len %d\n",
+ key_len);
+ kfree_sensitive(key);
+ return ERR_PTR(-EINVAL);
+ }
+
+ /* The last four bytes are the CRC in little-endian format */
+ key_len -= 4;
+ /*
+ * The linux implementation doesn't do pre- and post-inversions,
+ * so we have to do it manually.
+ */
+ crc = ~crc32(~0, key, key_len);
+
+ if (get_unaligned_le32(key + key_len) != crc) {
+ pr_debug("DH-HMAC-CHAP key crc mismatch (key %08x, crc %08x)\n",
+ get_unaligned_le32(key + key_len), crc);
+ kfree_sensitive(key);
+ return ERR_PTR(-EKEYREJECTED);
+ }
+ *out_len = key_len;
+ return key;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_extract_secret);
+
+u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn)
+{
+ const char *hmac_name;
+ struct crypto_shash *key_tfm;
+ struct shash_desc *shash;
+ u8 *transformed_key;
+ int ret;
+
+ /* No key transformation required */
+ if (key_hash == 0)
+ return NULL;
+
+ hmac_name = nvme_auth_hmac_name(key_hash);
+ if (!hmac_name) {
+ pr_warn("Invalid key hash id %d\n", key_hash);
+ return ERR_PTR(-EKEYREJECTED);
+ }
+ key_tfm = crypto_alloc_shash(hmac_name, 0, 0);
+ if (IS_ERR(key_tfm))
+ return (u8 *)key_tfm;
+
+ shash = kmalloc(sizeof(struct shash_desc) +
+ crypto_shash_descsize(key_tfm),
+ GFP_KERNEL);
+ if (!shash) {
+ crypto_free_shash(key_tfm);
+ return ERR_PTR(-ENOMEM);
+ }
+ transformed_key = kzalloc(crypto_shash_digestsize(key_tfm), GFP_KERNEL);
+ if (!transformed_key) {
+ ret = -ENOMEM;
+ goto out_free_shash;
+ }
+
+ shash->tfm = key_tfm;
+ ret = crypto_shash_setkey(key_tfm, key, key_len);
+ if (ret < 0)
+ goto out_free_shash;
+ ret = crypto_shash_init(shash);
+ if (ret < 0)
+ goto out_free_shash;
+ ret = crypto_shash_update(shash, nqn, strlen(nqn));
+ if (ret < 0)
+ goto out_free_shash;
+ ret = crypto_shash_update(shash, "NVMe-over-Fabrics", 17);
+ if (ret < 0)
+ goto out_free_shash;
+ ret = crypto_shash_final(shash, transformed_key);
+out_free_shash:
+ kfree(shash);
+ crypto_free_shash(key_tfm);
+ if (ret < 0) {
+ kfree_sensitive(transformed_key);
+ return ERR_PTR(ret);
+ }
+ return transformed_key;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_transform_key);
+
+static int nvme_auth_hash_skey(int hmac_id, u8 *skey, size_t skey_len, u8 *hkey)
+{
+ const char *digest_name;
+ struct crypto_shash *tfm;
+ int ret;
+
+ digest_name = nvme_auth_digest_name(hmac_id);
+ if (!digest_name) {
+ pr_debug("%s: failed to get digest for %d\n", __func__,
+ hmac_id);
+ return -EINVAL;
+ }
+ tfm = crypto_alloc_shash(digest_name, 0, 0);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ ret = crypto_shash_tfm_digest(tfm, skey, skey_len, hkey);
+ if (ret < 0)
+ pr_debug("%s: Failed to hash digest len %zu\n", __func__,
+ skey_len);
+
+ crypto_free_shash(tfm);
+ return ret;
+}
+
+int nvme_auth_augmented_challenge(u8 hmac_id, u8 *skey, size_t skey_len,
+ u8 *challenge, u8 *aug, size_t hlen)
+{
+ struct crypto_shash *tfm;
+ struct shash_desc *desc;
+ u8 *hashed_key;
+ const char *hmac_name;
+ int ret;
+
+ hashed_key = kmalloc(hlen, GFP_KERNEL);
+ if (!hashed_key)
+ return -ENOMEM;
+
+ ret = nvme_auth_hash_skey(hmac_id, skey,
+ skey_len, hashed_key);
+ if (ret < 0)
+ goto out_free_key;
+
+ hmac_name = nvme_auth_hmac_name(hmac_id);
+ if (!hmac_name) {
+ pr_warn("%s: invalid hash algorithm %d\n",
+ __func__, hmac_id);
+ ret = -EINVAL;
+ goto out_free_key;
+ }
+ tfm = crypto_alloc_shash(hmac_name, 0, 0);
+ if (IS_ERR(tfm)) {
+ ret = PTR_ERR(tfm);
+ goto out_free_key;
+ }
+ desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
+ GFP_KERNEL);
+ if (!desc) {
+ ret = -ENOMEM;
+ goto out_free_hash;
+ }
+ desc->tfm = tfm;
+
+ ret = crypto_shash_setkey(tfm, hashed_key, hlen);
+ if (ret)
+ goto out_free_desc;
+
+ ret = crypto_shash_init(desc);
+ if (ret)
+ goto out_free_desc;
+
+ ret = crypto_shash_update(desc, challenge, hlen);
+ if (ret)
+ goto out_free_desc;
+
+ ret = crypto_shash_final(desc, aug);
+out_free_desc:
+ kfree_sensitive(desc);
+out_free_hash:
+ crypto_free_shash(tfm);
+out_free_key:
+ kfree_sensitive(hashed_key);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_augmented_challenge);
+
+int nvme_auth_gen_privkey(struct crypto_kpp *dh_tfm, int dh_gid)
+{
+ char *pkey;
+ int ret, pkey_len;
+
+ if (dh_gid == NVME_AUTH_DHCHAP_DHGROUP_2048 ||
+ dh_gid == NVME_AUTH_DHCHAP_DHGROUP_3072 ||
+ dh_gid == NVME_AUTH_DHCHAP_DHGROUP_4096 ||
+ dh_gid == NVME_AUTH_DHCHAP_DHGROUP_6144 ||
+ dh_gid == NVME_AUTH_DHCHAP_DHGROUP_8192) {
+ struct dh p = {0};
+ int bits = nvme_auth_dhgroup_pubkey_size(dh_gid) << 3;
+ int dh_secret_len = 64;
+ u8 *dh_secret = kzalloc(dh_secret_len, GFP_KERNEL);
+
+ if (!dh_secret)
+ return -ENOMEM;
+
+ /*
+ * NVMe base spec v2.0: The DH value shall be set to the value
+ * of g^x mod p, where 'x' is a random number selected by the
+ * host that shall be at least 256 bits long.
+ *
+ * We will be using a 512 bit random number as private key.
+ * This is large enough to provide adequate security, but
+ * small enough such that we can trivially conform to
+ * NIST SP800-56A section 5.6.1.1.4 if
+ * we guarantee that the random number is not either
+ * all 0xff or all 0x00. But that should be guaranteed
+ * by the in-kernel RNG anyway.
+ */
+ get_random_bytes(dh_secret, dh_secret_len);
+
+ ret = crypto_ffdhe_params(&p, bits);
+ if (ret) {
+ kfree_sensitive(dh_secret);
+ return ret;
+ }
+
+ p.key = dh_secret;
+ p.key_size = dh_secret_len;
+
+ pkey_len = crypto_dh_key_len(&p);
+ pkey = kmalloc(pkey_len, GFP_KERNEL);
+ if (!pkey) {
+ kfree_sensitive(dh_secret);
+ return -ENOMEM;
+ }
+
+ get_random_bytes(pkey, pkey_len);
+ ret = crypto_dh_encode_key(pkey, pkey_len, &p);
+ if (ret) {
+ pr_debug("failed to encode private key, error %d\n",
+ ret);
+ kfree_sensitive(dh_secret);
+ goto out;
+ }
+ } else {
+ pr_warn("invalid dh group %d\n", dh_gid);
+ return -EINVAL;
+ }
+ ret = crypto_kpp_set_secret(dh_tfm, pkey, pkey_len);
+ if (ret)
+ pr_debug("failed to set private key, error %d\n", ret);
+out:
+ kfree_sensitive(pkey);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_gen_privkey);
+
+int nvme_auth_gen_pubkey(struct crypto_kpp *dh_tfm,
+ u8 *host_key, size_t host_key_len)
+{
+ struct kpp_request *req;
+ struct crypto_wait wait;
+ struct scatterlist dst;
+ int ret;
+
+ req = kpp_request_alloc(dh_tfm, GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ crypto_init_wait(&wait);
+ kpp_request_set_input(req, NULL, 0);
+ sg_init_one(&dst, host_key, host_key_len);
+ kpp_request_set_output(req, &dst, host_key_len);
+ kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ crypto_req_done, &wait);
+
+ ret = crypto_wait_req(crypto_kpp_generate_public_key(req), &wait);
+
+ kpp_request_free(req);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_gen_pubkey);
+
+int nvme_auth_gen_shared_secret(struct crypto_kpp *dh_tfm,
+ u8 *ctrl_key, size_t ctrl_key_len,
+ u8 *sess_key, size_t sess_key_len)
+{
+ struct kpp_request *req;
+ struct crypto_wait wait;
+ struct scatterlist src, dst;
+ int ret;
+
+ req = kpp_request_alloc(dh_tfm, GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ crypto_init_wait(&wait);
+ sg_init_one(&src, ctrl_key, ctrl_key_len);
+ kpp_request_set_input(req, &src, ctrl_key_len);
+ sg_init_one(&dst, sess_key, sess_key_len);
+ kpp_request_set_output(req, &dst, sess_key_len);
+ kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ crypto_req_done, &wait);
+
+ ret = crypto_wait_req(crypto_kpp_compute_shared_secret(req), &wait);
+
+ kpp_request_free(req);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_gen_shared_secret);
+
+static int nvme_auth_send(struct nvme_ctrl *ctrl, int qid,
+ void *data, size_t tl)
+{
+ struct nvme_command cmd = {};
+ blk_mq_req_flags_t flags = qid == NVME_QID_ANY ?
+ 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED;
+ struct request_queue *q = qid == NVME_QID_ANY ?
+ ctrl->fabrics_q : ctrl->connect_q;
+ int ret;
+
+ cmd.auth_send.opcode = nvme_fabrics_command;
+ cmd.auth_send.fctype = nvme_fabrics_type_auth_send;
+ cmd.auth_send.secp = NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER;
+ cmd.auth_send.spsp0 = 0x01;
+ cmd.auth_send.spsp1 = 0x01;
+ cmd.auth_send.tl = tl;
+
+ ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, tl, 0, qid,
+ 0, flags);
+ if (ret > 0)
+ dev_dbg(ctrl->device,
+ "%s: qid %d nvme status %d\n", __func__, qid, ret);
+ else if (ret < 0)
+ dev_dbg(ctrl->device,
+ "%s: qid %d error %d\n", __func__, qid, ret);
+ return ret;
+}
+
+static int nvme_auth_receive(struct nvme_ctrl *ctrl, int qid,
+ void *buf, size_t al)
+{
+ struct nvme_command cmd = {};
+ blk_mq_req_flags_t flags = qid == NVME_QID_ANY ?
+ 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED;
+ struct request_queue *q = qid == NVME_QID_ANY ?
+ ctrl->fabrics_q : ctrl->connect_q;
+ int ret;
+
+ cmd.auth_receive.opcode = nvme_fabrics_command;
+ cmd.auth_receive.fctype = nvme_fabrics_type_auth_receive;
+ cmd.auth_receive.secp = NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER;
+ cmd.auth_receive.spsp0 = 0x01;
+ cmd.auth_receive.spsp1 = 0x01;
+ cmd.auth_receive.al = al;
+
+ ret = __nvme_submit_sync_cmd(q, &cmd, NULL, buf, al, 0, qid,
+ 0, flags);
+ if (ret > 0) {
+ dev_dbg(ctrl->device, "%s: qid %d nvme status %x\n",
+ __func__, qid, ret);
+ ret = -EIO;
+ }
+ if (ret < 0) {
+ dev_dbg(ctrl->device, "%s: qid %d error %d\n",
+ __func__, qid, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int nvme_auth_receive_validate(struct nvme_ctrl *ctrl, int qid,
+ struct nvmf_auth_dhchap_failure_data *data,
+ u16 transaction, u8 expected_msg)
+{
+ dev_dbg(ctrl->device, "%s: qid %d auth_type %d auth_id %x\n",
+ __func__, qid, data->auth_type, data->auth_id);
+
+ if (data->auth_type == NVME_AUTH_COMMON_MESSAGES &&
+ data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) {
+ return data->rescode_exp;
+ }
+ if (data->auth_type != NVME_AUTH_DHCHAP_MESSAGES ||
+ data->auth_id != expected_msg) {
+ dev_warn(ctrl->device,
+ "qid %d invalid message %02x/%02x\n",
+ qid, data->auth_type, data->auth_id);
+ return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE;
+ }
+ if (le16_to_cpu(data->t_id) != transaction) {
+ dev_warn(ctrl->device,
+ "qid %d invalid transaction ID %d\n",
+ qid, le16_to_cpu(data->t_id));
+ return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE;
+ }
+ return 0;
+}
+
+static int nvme_auth_set_dhchap_negotiate_data(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ struct nvmf_auth_dhchap_negotiate_data *data = chap->buf;
+ size_t size = sizeof(*data) + sizeof(union nvmf_auth_protocol);
+
+ if (chap->buf_size < size) {
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ return -EINVAL;
+ }
+ memset((u8 *)chap->buf, 0, size);
+ data->auth_type = NVME_AUTH_COMMON_MESSAGES;
+ data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
+ data->t_id = cpu_to_le16(chap->transaction);
+ data->sc_c = 0; /* No secure channel concatenation */
+ data->napd = 1;
+ data->auth_protocol[0].dhchap.authid = NVME_AUTH_DHCHAP_AUTH_ID;
+ data->auth_protocol[0].dhchap.halen = 3;
+ data->auth_protocol[0].dhchap.dhlen = 6;
+ data->auth_protocol[0].dhchap.idlist[0] = NVME_AUTH_DHCHAP_SHA256;
+ data->auth_protocol[0].dhchap.idlist[1] = NVME_AUTH_DHCHAP_SHA384;
+ data->auth_protocol[0].dhchap.idlist[2] = NVME_AUTH_DHCHAP_SHA512;
+ data->auth_protocol[0].dhchap.idlist[3] = NVME_AUTH_DHCHAP_DHGROUP_NULL;
+ data->auth_protocol[0].dhchap.idlist[4] = NVME_AUTH_DHCHAP_DHGROUP_2048;
+ data->auth_protocol[0].dhchap.idlist[5] = NVME_AUTH_DHCHAP_DHGROUP_3072;
+ data->auth_protocol[0].dhchap.idlist[6] = NVME_AUTH_DHCHAP_DHGROUP_4096;
+ data->auth_protocol[0].dhchap.idlist[7] = NVME_AUTH_DHCHAP_DHGROUP_6144;
+ data->auth_protocol[0].dhchap.idlist[8] = NVME_AUTH_DHCHAP_DHGROUP_8192;
+
+ return size;
+}
+
+static int nvme_auth_process_dhchap_challenge(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ struct nvmf_auth_dhchap_challenge_data *data = chap->buf;
+ size_t size = sizeof(*data) + data->hl + data->dhvlen;
+ const char *hmac_name;
+ const char *gid_name;
+
+ if (chap->buf_size < size) {
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ return NVME_SC_INVALID_FIELD;
+ }
+
+ hmac_name = nvme_auth_hmac_name(data->hashid);
+ if (!hmac_name) {
+ dev_warn(ctrl->device,
+ "qid %d: invalid HASH ID %d\n",
+ chap->qid, data->hashid);
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
+ return -EPROTO;
+ }
+ if (chap->hash_id == data->hashid && chap->shash_tfm &&
+ !strcmp(crypto_shash_alg_name(chap->shash_tfm), hmac_name) &&
+ crypto_shash_digestsize(chap->shash_tfm) == data->hl) {
+ dev_dbg(ctrl->device,
+ "qid %d: reuse existing hash %s\n",
+ chap->qid, hmac_name);
+ goto select_kpp;
+ }
+ if (chap->shash_tfm) {
+ crypto_free_shash(chap->shash_tfm);
+ chap->hash_id = 0;
+ chap->hash_len = 0;
+ }
+ chap->shash_tfm = crypto_alloc_shash(hmac_name, 0,
+ CRYPTO_ALG_ALLOCATES_MEMORY);
+ if (IS_ERR(chap->shash_tfm)) {
+ dev_warn(ctrl->device,
+ "qid %d: failed to allocate hash %s, error %ld\n",
+ chap->qid, hmac_name, PTR_ERR(chap->shash_tfm));
+ chap->shash_tfm = NULL;
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
+ return NVME_SC_AUTH_REQUIRED;
+ }
+ if (crypto_shash_digestsize(chap->shash_tfm) != data->hl) {
+ dev_warn(ctrl->device,
+ "qid %d: invalid hash length %d\n",
+ chap->qid, data->hl);
+ crypto_free_shash(chap->shash_tfm);
+ chap->shash_tfm = NULL;
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
+ return NVME_SC_AUTH_REQUIRED;
+ }
+ if (chap->hash_id != data->hashid) {
+ kfree(chap->host_response);
+ chap->host_response = NULL;
+ }
+ chap->hash_id = data->hashid;
+ chap->hash_len = data->hl;
+ dev_dbg(ctrl->device, "qid %d: selected hash %s\n",
+ chap->qid, hmac_name);
+
+ gid_name = nvme_auth_dhgroup_kpp(data->dhgid);
+ if (!gid_name) {
+ dev_warn(ctrl->device,
+ "qid %d: invalid DH group id %d\n",
+ chap->qid, data->dhgid);
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
+ return -EPROTO;
+ }
+
+ if (data->dhgid != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
+ if (data->dhvlen == 0) {
+ dev_warn(ctrl->device,
+ "qid %d: empty DH value\n",
+ chap->qid);
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
+ return -EPROTO;
+ }
+ chap->dh_tfm = crypto_alloc_kpp(gid_name, 0, 0);
+ if (IS_ERR(chap->dh_tfm)) {
+ int ret = PTR_ERR(chap->dh_tfm);
+
+ dev_warn(ctrl->device,
+ "qid %d: failed to initialize %s\n",
+ chap->qid, gid_name);
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
+ chap->dh_tfm = NULL;
+ return ret;
+ }
+ chap->dhgroup_id = data->dhgid;
+ } else if (data->dhvlen != 0) {
+ dev_warn(ctrl->device,
+ "qid %d: invalid DH value for NULL DH\n",
+ chap->qid);
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
+ return -EPROTO;
+ }
+ dev_dbg(ctrl->device, "qid %d: selected DH group %s\n",
+ chap->qid, gid_name);
+
+select_kpp:
+ chap->s1 = le32_to_cpu(data->seqnum);
+ memcpy(chap->c1, data->cval, chap->hash_len);
+
+ return 0;
+}
+
+static int nvme_auth_set_dhchap_reply_data(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ struct nvmf_auth_dhchap_reply_data *data = chap->buf;
+ size_t size = sizeof(*data);
+
+ size += 2 * chap->hash_len;
+ if (ctrl->opts->dhchap_bidi) {
+ get_random_bytes(chap->c2, chap->hash_len);
+ chap->s2 = nvme_dhchap_seqnum++;
+ } else {
+ memset(chap->c2, 0, chap->hash_len);
+ }
+
+ if (chap->buf_size < size) {
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ return -EINVAL;
+ }
+ memset(chap->buf, 0, size);
+ data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
+ data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_REPLY;
+ data->t_id = cpu_to_le16(chap->transaction);
+ data->hl = chap->hash_len;
+ data->dhvlen = 0;
+ data->seqnum = cpu_to_le32(chap->s2);
+ memcpy(data->rval, chap->response, chap->hash_len);
+ if (ctrl->opts->dhchap_bidi) {
+ dev_dbg(ctrl->device, "%s: qid %d ctrl challenge %*ph\n",
+ __func__, chap->qid,
+ chap->hash_len, chap->c2);
+ data->cvalid = 1;
+ memcpy(data->rval + chap->hash_len, chap->c2,
+ chap->hash_len);
+ }
+ return size;
+}
+
+static int nvme_auth_process_dhchap_success1(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ struct nvmf_auth_dhchap_success1_data *data = chap->buf;
+ size_t size = sizeof(*data);
+
+ if (ctrl->opts->dhchap_bidi)
+ size += chap->hash_len;
+
+ if (chap->buf_size < size) {
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+ return NVME_SC_INVALID_FIELD;
+ }
+
+ if (data->hl != chap->hash_len) {
+ dev_warn(ctrl->device,
+ "qid %d: invalid hash length %d\n",
+ chap->qid, data->hl);
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
+ return NVME_SC_INVALID_FIELD;
+ }
+
+ if (!data->rvalid)
+ return 0;
+
+ /* Validate controller response */
+ if (memcmp(chap->response, data->rval, data->hl)) {
+ dev_dbg(ctrl->device, "%s: qid %d ctrl response %*ph\n",
+ __func__, chap->qid, chap->hash_len, data->rval);
+ dev_dbg(ctrl->device, "%s: qid %d host response %*ph\n",
+ __func__, chap->qid, chap->hash_len, chap->response);
+ dev_warn(ctrl->device,
+ "qid %d: controller authentication failed\n",
+ chap->qid);
+ chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
+ return NVME_SC_AUTH_REQUIRED;
+ }
+ dev_info(ctrl->device,
+ "qid %d: controller authenticated\n",
+ chap->qid);
+ return 0;
+}
+
+static int nvme_auth_set_dhchap_success2_data(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ struct nvmf_auth_dhchap_success2_data *data = chap->buf;
+ size_t size = sizeof(*data);
+
+ memset(chap->buf, 0, size);
+ data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
+ data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2;
+ data->t_id = cpu_to_le16(chap->transaction);
+
+ return size;
+}
+
+static int nvme_auth_set_dhchap_failure2_data(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ struct nvmf_auth_dhchap_failure_data *data = chap->buf;
+ size_t size = sizeof(*data);
+
+ memset(chap->buf, 0, size);
+ data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
+ data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_FAILURE2;
+ data->t_id = cpu_to_le16(chap->transaction);
+ data->rescode = NVME_AUTH_DHCHAP_FAILURE_REASON_FAILED;
+ data->rescode_exp = chap->status;
+
+ return size;
+}
+
+static int nvme_auth_dhchap_host_response(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ SHASH_DESC_ON_STACK(shash, chap->shash_tfm);
+ u8 buf[4], *challenge = chap->c1;
+ int ret;
+
+ dev_dbg(ctrl->device, "%s: qid %d host response seq %d transaction %d\n",
+ __func__, chap->qid, chap->s1, chap->transaction);
+ if (chap->dh_tfm) {
+ challenge = kmalloc(chap->hash_len, GFP_KERNEL);
+ if (!challenge) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ ret = nvme_auth_augmented_challenge(chap->hash_id,
+ chap->sess_key,
+ chap->sess_key_len,
+ chap->c1, challenge,
+ chap->hash_len);
+ if (ret)
+ goto out;
+ }
+ shash->tfm = chap->shash_tfm;
+ ret = crypto_shash_init(shash);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, challenge, chap->hash_len);
+ if (ret)
+ goto out;
+ put_unaligned_le32(chap->s1, buf);
+ ret = crypto_shash_update(shash, buf, 4);
+ if (ret)
+ goto out;
+ put_unaligned_le16(chap->transaction, buf);
+ ret = crypto_shash_update(shash, buf, 2);
+ if (ret)
+ goto out;
+ memset(buf, 0, sizeof(buf));
+ ret = crypto_shash_update(shash, buf, 1);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, "HostHost", 8);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, ctrl->opts->host->nqn,
+ strlen(ctrl->opts->host->nqn));
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, buf, 1);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, ctrl->opts->subsysnqn,
+ strlen(ctrl->opts->subsysnqn));
+ if (ret)
+ goto out;
+ ret = crypto_shash_final(shash, chap->response);
+out:
+ if (challenge != chap->c1)
+ kfree(challenge);
+ return ret;
+}
+
+static int nvme_auth_dhchap_ctrl_response(struct nvme_ctrl *ctrl,
+ struct nvme_dhchap_queue_context *chap)
+{
+ SHASH_DESC_ON_STACK(shash, chap->shash_tfm);
+ u8 buf[4], *challenge = chap->c2;
+ int ret;
+
+ if (chap->dh_tfm) {
+ challenge = kmalloc(chap->hash_len, GFP_KERNEL);
+ if (!challenge) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ ret = nvme_auth_augmented_challenge(chap->hash_id,
+ chap->sess_key,
+ chap->sess_key_len,
+ chap->c2, challenge,
+ chap->hash_len);
+ if (ret)
+ goto out;
+ }
+ dev_dbg(ctrl->device, "%s: qid %d host response seq %d transaction %d\n",
+ __func__, chap->qid, chap->s2, chap->transaction);
+ dev_dbg(ctrl->device, "%s: qid %d challenge %*ph\n",
+ __func__, chap->qid, chap->hash_len, challenge);
+ dev_dbg(ctrl->device, "%s: qid %d subsysnqn %s\n",
+ __func__, chap->qid, ctrl->opts->subsysnqn);
+ dev_dbg(ctrl->device, "%s: qid %d hostnqn %s\n",
+ __func__, chap->qid, ctrl->opts->host->nqn);
+ shash->tfm = chap->shash_tfm;
+ ret = crypto_shash_init(shash);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, challenge, chap->hash_len);
+ if (ret)
+ goto out;
+ put_unaligned_le32(chap->s2, buf);
+ ret = crypto_shash_update(shash, buf, 4);
+ if (ret)
+ goto out;
+ put_unaligned_le16(chap->transaction, buf);
+ ret = crypto_shash_update(shash, buf, 2);
+ if (ret)
+ goto out;
+ memset(buf, 0, sizeof(buf));
+ ret = crypto_shash_update(shash, buf, 1);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, "Controller", 10);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, ctrl->opts->subsysnqn,
+ strlen(ctrl->opts->subsysnqn));
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, buf, 1);
+ if (ret)
+ goto out;
+ ret = crypto_shash_update(shash, ctrl->opts->host->nqn,
+ strlen(ctrl->opts->host->nqn));
+ if (ret)
+ goto out;
+ ret = crypto_shash_final(shash, chap->response);
+out:
+ if (challenge != chap->c2)
+ kfree(challenge);
+ return ret;
+}
+
+int nvme_auth_generate_key(struct nvme_ctrl *ctrl)
+{
+ int ret;
+ u8 key_hash;
+
+ if (!ctrl->opts->dhchap_secret)
+ return 0;
+
+ if (ctrl->dhchap_key && ctrl->dhchap_key_len)
+ /* Key already set */
+ return 0;
+
+ if (sscanf(ctrl->opts->dhchap_secret, "DHHC-1:%hhu:%*s:",
+ &key_hash) != 1)
+ return -EINVAL;
+
+ /* Pass in the secret without the 'DHHC-1:XX:' prefix */
+ ctrl->dhchap_key = nvme_auth_extract_secret(ctrl->opts->dhchap_secret + 10,
+ &ctrl->dhchap_key_len);
+ if (IS_ERR(ctrl->dhchap_key)) {
+ ret = PTR_ERR(ctrl->dhchap_key);
+ ctrl->dhchap_key = NULL;
+ return ret;
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_generate_key);
+
+static void nvme_auth_reset(struct nvme_dhchap_queue_context *chap)
+{
+ chap->status = 0;
+ chap->error = 0;
+ chap->s1 = 0;
+ chap->s2 = 0;
+ chap->transaction = 0;
+ memset(chap->c1, 0, sizeof(chap->c1));
+ memset(chap->c2, 0, sizeof(chap->c2));
+}
+
+static void __nvme_auth_free(struct nvme_dhchap_queue_context *chap)
+{
+ if (chap->shash_tfm)
+ crypto_free_shash(chap->shash_tfm);
+ kfree_sensitive(chap->host_response);
+ kfree(chap->buf);
+ kfree(chap);
+}
+
+static void __nvme_auth_work(struct work_struct *work)
+{
+ struct nvme_dhchap_queue_context *chap =
+ container_of(work, struct nvme_dhchap_queue_context, auth_work);
+ struct nvme_ctrl *ctrl = chap->ctrl;
+ size_t tl;
+ int ret = 0;
+
+ chap->transaction = ctrl->transaction++;
+
+ /* DH-HMAC-CHAP Step 1: send negotiate */
+ dev_dbg(ctrl->device, "%s: qid %d send negotiate\n",
+ __func__, chap->qid);
+ ret = nvme_auth_set_dhchap_negotiate_data(ctrl, chap);
+ if (ret < 0) {
+ chap->error = ret;
+ return;
+ }
+ tl = ret;
+ ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl);
+ if (ret) {
+ chap->error = ret;
+ return;
+ }
+
+ /* DH-HMAC-CHAP Step 2: receive challenge */
+ dev_dbg(ctrl->device, "%s: qid %d receive challenge\n",
+ __func__, chap->qid);
+
+ memset(chap->buf, 0, chap->buf_size);
+ ret = nvme_auth_receive(ctrl, chap->qid, chap->buf, chap->buf_size);
+ if (ret) {
+ dev_warn(ctrl->device,
+ "qid %d failed to receive challenge, %s %d\n",
+ chap->qid, ret < 0 ? "error" : "nvme status", ret);
+ chap->error = ret;
+ return;
+ }
+ ret = nvme_auth_receive_validate(ctrl, chap->qid, chap->buf, chap->transaction,
+ NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE);
+ if (ret) {
+ chap->status = ret;
+ chap->error = NVME_SC_AUTH_REQUIRED;
+ return;
+ }
+
+ ret = nvme_auth_process_dhchap_challenge(ctrl, chap);
+ if (ret) {
+ /* Invalid challenge parameters */
+ goto fail2;
+ }
+
+ if (chap->ctrl_key_len) {
+ dev_dbg(ctrl->device,
+ "%s: qid %d DH exponential\n",
+ __func__, chap->qid);
+ ret = nvme_auth_dhchap_exponential(ctrl, chap);
+ if (ret)
+ goto fail2;
+ }
+
+ dev_dbg(ctrl->device, "%s: qid %d host response\n",
+ __func__, chap->qid);
+ ret = nvme_auth_dhchap_host_response(ctrl, chap);
+ if (ret)
+ goto fail2;
+
+ /* DH-HMAC-CHAP Step 3: send reply */
+ dev_dbg(ctrl->device, "%s: qid %d send reply\n",
+ __func__, chap->qid);
+ ret = nvme_auth_set_dhchap_reply_data(ctrl, chap);
+ if (ret < 0)
+ goto fail2;
+
+ tl = ret;
+ ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl);
+ if (ret)
+ goto fail2;
+
+ /* DH-HMAC-CHAP Step 4: receive success1 */
+ dev_dbg(ctrl->device, "%s: qid %d receive success1\n",
+ __func__, chap->qid);
+
+ memset(chap->buf, 0, chap->buf_size);
+ ret = nvme_auth_receive(ctrl, chap->qid, chap->buf, chap->buf_size);
+ if (ret) {
+ dev_warn(ctrl->device,
+ "qid %d failed to receive success1, %s %d\n",
+ chap->qid, ret < 0 ? "error" : "nvme status", ret);
+ chap->error = ret;
+ return;
+ }
+ ret = nvme_auth_receive_validate(ctrl, chap->qid,
+ chap->buf, chap->transaction,
+ NVME_AUTH_DHCHAP_MESSAGE_SUCCESS1);
+ if (ret) {
+ chap->status = ret;
+ chap->error = NVME_SC_AUTH_REQUIRED;
+ return;
+ }
+
+ if (ctrl->opts->dhchap_bidi) {
+ dev_dbg(ctrl->device,
+ "%s: qid %d controller response\n",
+ __func__, chap->qid);
+ ret = nvme_auth_dhchap_ctrl_response(ctrl, chap);
+ if (ret)
+ goto fail2;
+ }
+
+ ret = nvme_auth_process_dhchap_success1(ctrl, chap);
+ if (ret < 0) {
+ /* Controller authentication failed */
+ goto fail2;
+ }
+
+ /* DH-HMAC-CHAP Step 5: send success2 */
+ dev_dbg(ctrl->device, "%s: qid %d send success2\n",
+ __func__, chap->qid);
+ tl = nvme_auth_set_dhchap_success2_data(ctrl, chap);
+ ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl);
+ if (!ret) {
+ chap->error = 0;
+ return;
+ }
+
+fail2:
+ dev_dbg(ctrl->device, "%s: qid %d send failure2, status %x\n",
+ __func__, chap->qid, chap->status);
+ tl = nvme_auth_set_dhchap_failure2_data(ctrl, chap);
+ ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl);
+ if (!ret)
+ ret = -EPROTO;
+ chap->error = ret;
+}
+
+int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
+{
+ struct nvme_dhchap_queue_context *chap;
+
+ if (!ctrl->dhchap_key || !ctrl->dhchap_key_len) {
+ dev_warn(ctrl->device, "qid %d: no key\n", qid);
+ return -ENOKEY;
+ }
+
+ mutex_lock(&ctrl->dhchap_auth_mutex);
+ /* Check if the context is already queued */
+ list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
+ if (chap->qid == qid) {
+ mutex_unlock(&ctrl->dhchap_auth_mutex);
+ queue_work(nvme_wq, &chap->auth_work);
+ return 0;
+ }
+ }
+ chap = kzalloc(sizeof(*chap), GFP_KERNEL);
+ if (!chap) {
+ mutex_unlock(&ctrl->dhchap_auth_mutex);
+ return -ENOMEM;
+ }
+ chap->qid = qid;
+ chap->ctrl = ctrl;
+
+ /*
+ * Allocate a large enough buffer for the entire negotiation:
+ * 4k should be enough even for ffdhe8192.
+ */
+ chap->buf_size = 4096;
+ chap->buf = kzalloc(chap->buf_size, GFP_KERNEL);
+ if (!chap->buf) {
+ mutex_unlock(&ctrl->dhchap_auth_mutex);
+ kfree(chap);
+ return -ENOMEM;
+ }
+
+ INIT_WORK(&chap->auth_work, __nvme_auth_work);
+ list_add(&chap->entry, &ctrl->dhchap_auth_list);
+ mutex_unlock(&ctrl->dhchap_auth_mutex);
+ queue_work(nvme_wq, &chap->auth_work);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_negotiate);
+
+int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid)
+{
+ struct nvme_dhchap_queue_context *chap;
+ int ret;
+
+ mutex_lock(&ctrl->dhchap_auth_mutex);
+ list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
+ if (chap->qid != qid)
+ continue;
+ mutex_unlock(&ctrl->dhchap_auth_mutex);
+ flush_work(&chap->auth_work);
+ ret = chap->error;
+ nvme_auth_reset(chap);
+ return ret;
+ }
+ mutex_unlock(&ctrl->dhchap_auth_mutex);
+ return -ENXIO;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_wait);
+
+/* Assumes that the controller is in state RESETTING */
+static void nvme_dhchap_auth_work(struct work_struct *work)
+{
+ struct nvme_ctrl *ctrl =
+ container_of(work, struct nvme_ctrl, dhchap_auth_work);
+ int ret, q;
+
+ nvme_stop_queues(ctrl);
+ /* Authenticate admin queue first */
+ ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY);
+ if (ret) {
+ dev_warn(ctrl->device,
+ "qid 0: error %d setting up authentication\n", ret);
+ goto out;
+ }
+ ret = nvme_auth_wait(ctrl, NVME_QID_ANY);
+ if (ret) {
+ dev_warn(ctrl->device,
+ "qid 0: authentication failed\n");
+ goto out;
+ }
+ dev_info(ctrl->device, "qid 0: authenticated\n");
+
+ for (q = 1; q < ctrl->queue_count; q++) {
+ ret = nvme_auth_negotiate(ctrl, q);
+ if (ret) {
+ dev_warn(ctrl->device,
+ "qid %d: error %d setting up authentication\n",
+ q, ret);
+ goto out;
+ }
+ }
+out:
+ /*
+ * Failure is a soft-state; credentials remain valid until
+ * the controller terminates the connection.
+ */
+ if (nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
+ nvme_start_queues(ctrl);
+}
+
+void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl)
+{
+ INIT_LIST_HEAD(&ctrl->dhchap_auth_list);
+ INIT_WORK(&ctrl->dhchap_auth_work, nvme_dhchap_auth_work);
+ mutex_init(&ctrl->dhchap_auth_mutex);
+ nvme_auth_generate_key(ctrl);
+}
+EXPORT_SYMBOL_GPL(nvme_auth_init_ctrl);
+
+void nvme_auth_stop(struct nvme_ctrl *ctrl)
+{
+ struct nvme_dhchap_queue_context *chap = NULL, *tmp;
+
+ cancel_work_sync(&ctrl->dhchap_auth_work);
+ mutex_lock(&ctrl->dhchap_auth_mutex);
+ list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry)
+ cancel_work_sync(&chap->auth_work);
+ mutex_unlock(&ctrl->dhchap_auth_mutex);
+}
+EXPORT_SYMBOL_GPL(nvme_auth_stop);
+
+void nvme_auth_free(struct nvme_ctrl *ctrl)
+{
+ struct nvme_dhchap_queue_context *chap = NULL, *tmp;
+
+ mutex_lock(&ctrl->dhchap_auth_mutex);
+ list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry) {
+ list_del_init(&chap->entry);
+ flush_work(&chap->auth_work);
+ __nvme_auth_free(chap);
+ }
+ mutex_unlock(&ctrl->dhchap_auth_mutex);
+ kfree(ctrl->dhchap_key);
+ ctrl->dhchap_key = NULL;
+ ctrl->dhchap_key_len = 0;
+}
+EXPORT_SYMBOL_GPL(nvme_auth_free);
diff --git a/drivers/nvme/host/auth.h b/drivers/nvme/host/auth.h
new file mode 100644
index 000000000000..cf1255f9db6d
--- /dev/null
+++ b/drivers/nvme/host/auth.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2021 Hannes Reinecke, SUSE Software Solutions
+ */
+
+#ifndef _NVME_AUTH_H
+#define _NVME_AUTH_H
+
+#include <crypto/kpp.h>
+
+const char *nvme_auth_dhgroup_name(int dhgroup_id);
+int nvme_auth_dhgroup_pubkey_size(int dhgroup_id);
+int nvme_auth_dhgroup_privkey_size(int dhgroup_id);
+const char *nvme_auth_dhgroup_kpp(int dhgroup_id);
+int nvme_auth_dhgroup_id(const char *dhgroup_name);
+
+const char *nvme_auth_hmac_name(int hmac_id);
+const char *nvme_auth_digest_name(int hmac_id);
+int nvme_auth_hmac_id(const char *hmac_name);
+
+unsigned char *nvme_auth_extract_secret(unsigned char *dhchap_secret,
+ size_t *dhchap_key_len);
+u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn);
+
+#endif /* _NVME_AUTH_H */
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 7efb31b87f37..f669b054790b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -24,6 +24,7 @@

#include "nvme.h"
#include "fabrics.h"
+#include "auth.h"

#define CREATE_TRACE_POINTS
#include "trace.h"
@@ -322,6 +323,7 @@ enum nvme_disposition {
COMPLETE,
RETRY,
FAILOVER,
+ AUTHENTICATE,
};

static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
@@ -329,6 +331,9 @@ static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
if (likely(nvme_req(req)->status == 0))
return COMPLETE;

+ if ((nvme_req(req)->status & 0x7ff) == NVME_SC_AUTH_REQUIRED)
+ return AUTHENTICATE;
+
if (blk_noretry_request(req) ||
(nvme_req(req)->status & NVME_SC_DNR) ||
nvme_req(req)->retries >= nvme_max_retries)
@@ -361,11 +366,13 @@ static inline void nvme_end_req(struct request *req)

void nvme_complete_rq(struct request *req)
{
+ struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
+
trace_nvme_complete_rq(req);
nvme_cleanup_cmd(req);

- if (nvme_req(req)->ctrl->kas)
- nvme_req(req)->ctrl->comp_seen = true;
+ if (ctrl->kas)
+ ctrl->comp_seen = true;

switch (nvme_decide_disposition(req)) {
case COMPLETE:
@@ -377,6 +384,15 @@ void nvme_complete_rq(struct request *req)
case FAILOVER:
nvme_failover_req(req);
return;
+ case AUTHENTICATE:
+#ifdef CONFIG_NVME_AUTH
+ if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
+ queue_work(nvme_wq, &ctrl->dhchap_auth_work);
+ nvme_retry_req(req);
+#else
+ nvme_end_req(req);
+#endif
+ return;
}
}
EXPORT_SYMBOL_GPL(nvme_complete_rq);
@@ -707,7 +723,9 @@ bool __nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
switch (ctrl->state) {
case NVME_CTRL_CONNECTING:
if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
- req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
+ (req->cmd->fabrics.fctype == nvme_fabrics_type_connect ||
+ req->cmd->fabrics.fctype == nvme_fabrics_type_auth_send ||
+ req->cmd->fabrics.fctype == nvme_fabrics_type_auth_receive))
return true;
break;
default:
@@ -3458,6 +3476,51 @@ static ssize_t nvme_ctrl_fast_io_fail_tmo_store(struct device *dev,
static DEVICE_ATTR(fast_io_fail_tmo, S_IRUGO | S_IWUSR,
nvme_ctrl_fast_io_fail_tmo_show, nvme_ctrl_fast_io_fail_tmo_store);

+#ifdef CONFIG_NVME_AUTH
+static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+ struct nvmf_ctrl_options *opts = ctrl->opts;
+
+ if (!opts->dhchap_secret)
+ return sysfs_emit(buf, "none\n");
+ return sysfs_emit(buf, "%s\n", opts->dhchap_secret);
+}
+
+static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+ struct nvmf_ctrl_options *opts = ctrl->opts;
+ char *dhchap_secret;
+
+ if (!opts->dhchap_secret)
+ return -EINVAL;
+ if (count < 7)
+ return -EINVAL;
+ if (memcmp(buf, "DHHC-1:", 7))
+ return -EINVAL;
+
+ dhchap_secret = kzalloc(count + 1, GFP_KERNEL);
+ if (!dhchap_secret)
+ return -ENOMEM;
+ memcpy(dhchap_secret, buf, count);
+ if (strcmp(dhchap_secret, opts->dhchap_secret)) {
+ kfree(opts->dhchap_secret);
+ opts->dhchap_secret = dhchap_secret;
+ /* Key has changed; reset authentication data */
+ nvme_auth_free(ctrl);
+ nvme_auth_generate_key(ctrl);
+ }
+ if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
+ queue_work(nvme_wq, &ctrl->dhchap_auth_work);
+ return count;
+}
+static DEVICE_ATTR(dhchap_secret, S_IRUGO | S_IWUSR,
+ nvme_ctrl_dhchap_secret_show, nvme_ctrl_dhchap_secret_store);
+#endif
+
static struct attribute *nvme_dev_attrs[] = {
&dev_attr_reset_controller.attr,
&dev_attr_rescan_controller.attr,
@@ -3479,6 +3542,9 @@ static struct attribute *nvme_dev_attrs[] = {
&dev_attr_reconnect_delay.attr,
&dev_attr_fast_io_fail_tmo.attr,
&dev_attr_kato.attr,
+#ifdef CONFIG_NVME_AUTH
+ &dev_attr_dhchap_secret.attr,
+#endif
NULL
};

@@ -3502,6 +3568,10 @@ static umode_t nvme_dev_attrs_are_visible(struct kobject *kobj,
return 0;
if (a == &dev_attr_fast_io_fail_tmo.attr && !ctrl->opts)
return 0;
+#ifdef CONFIG_NVME_AUTH
+ if (a == &dev_attr_dhchap_secret.attr && !ctrl->opts)
+ return 0;
+#endif

return a->mode;
}
@@ -4312,6 +4382,7 @@ EXPORT_SYMBOL_GPL(nvme_complete_async_event);
void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
{
nvme_mpath_stop(ctrl);
+ nvme_auth_stop(ctrl);
nvme_stop_keep_alive(ctrl);
nvme_stop_failfast_work(ctrl);
flush_work(&ctrl->async_event_work);
@@ -4366,6 +4437,7 @@ static void nvme_free_ctrl(struct device *dev)

nvme_free_cels(ctrl);
nvme_mpath_uninit(ctrl);
+ nvme_auth_free(ctrl);
__free_page(ctrl->discard_page);

if (subsys) {
@@ -4456,6 +4528,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,

nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device));
nvme_mpath_init_ctrl(ctrl);
+ nvme_auth_init_ctrl(ctrl);

return 0;
out_free_name:
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 9a8eade7cd23..ee6058c24743 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -370,6 +370,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
union nvme_result res;
struct nvmf_connect_data *data;
int ret;
+ u32 result;

cmd.connect.opcode = nvme_fabrics_command;
cmd.connect.fctype = nvme_fabrics_type_connect;
@@ -402,8 +403,25 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
goto out_free_data;
}

- ctrl->cntlid = le16_to_cpu(res.u16);
-
+ result = le32_to_cpu(res.u32);
+ ctrl->cntlid = result & 0xFFFF;
+ if ((result >> 16) & 2) {
+ /* Authentication required */
+ ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY);
+ if (ret) {
+ dev_warn(ctrl->device,
+ "qid 0: failed to setup authentication\n");
+ ret = NVME_SC_AUTH_REQUIRED;
+ goto out_free_data;
+ }
+ ret = nvme_auth_wait(ctrl, NVME_QID_ANY);
+ if (ret)
+ dev_warn(ctrl->device,
+ "qid 0: authentication failed\n");
+ else
+ dev_info(ctrl->device,
+ "qid 0: authenticated\n");
+ }
out_free_data:
kfree(data);
return ret;
@@ -436,6 +454,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
struct nvmf_connect_data *data;
union nvme_result res;
int ret;
+ u32 result;

cmd.connect.opcode = nvme_fabrics_command;
cmd.connect.fctype = nvme_fabrics_type_connect;
@@ -461,6 +480,24 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
&cmd, data);
}
+ result = le32_to_cpu(res.u32);
+ if ((result >> 16) & 2) {
+ /* Authentication required */
+ ret = nvme_auth_negotiate(ctrl, qid);
+ if (ret) {
+ dev_warn(ctrl->device,
+ "qid %d: failed to setup authentication\n", qid);
+ ret = NVME_SC_AUTH_REQUIRED;
+ } else {
+ ret = nvme_auth_wait(ctrl, qid);
+ if (ret)
+ dev_warn(ctrl->device,
+ "qid %u: authentication failed\n", qid);
+ else
+ dev_info(ctrl->device,
+ "qid %u: authenticated\n", qid);
+ }
+ }
kfree(data);
return ret;
}
@@ -552,6 +589,8 @@ static const match_table_t opt_tokens = {
{ NVMF_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d" },
{ NVMF_OPT_TOS, "tos=%d" },
{ NVMF_OPT_FAIL_FAST_TMO, "fast_io_fail_tmo=%d" },
+ { NVMF_OPT_DHCHAP_SECRET, "dhchap_secret=%s" },
+ { NVMF_OPT_DHCHAP_BIDI, "dhchap_bidi" },
{ NVMF_OPT_ERR, NULL }
};

@@ -827,6 +866,23 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
}
opts->tos = token;
break;
+ case NVMF_OPT_DHCHAP_SECRET:
+ p = match_strdup(args);
+ if (!p) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ if (strlen(p) < 11 || strncmp(p, "DHHC-1:", 7)) {
+ pr_err("Invalid DH-CHAP secret %s\n", p);
+ ret = -EINVAL;
+ goto out;
+ }
+ kfree(opts->dhchap_secret);
+ opts->dhchap_secret = p;
+ break;
+ case NVMF_OPT_DHCHAP_BIDI:
+ opts->dhchap_bidi = true;
+ break;
default:
pr_warn("unknown parameter or missing value '%s' in ctrl creation request\n",
p);
@@ -945,6 +1001,7 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts)
kfree(opts->subsysnqn);
kfree(opts->host_traddr);
kfree(opts->host_iface);
+ kfree(opts->dhchap_secret);
kfree(opts);
}
EXPORT_SYMBOL_GPL(nvmf_free_options);
@@ -954,7 +1011,10 @@ EXPORT_SYMBOL_GPL(nvmf_free_options);
NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
NVMF_OPT_DISABLE_SQFLOW |\
- NVMF_OPT_FAIL_FAST_TMO)
+ NVMF_OPT_CTRL_LOSS_TMO |\
+ NVMF_OPT_FAIL_FAST_TMO |\
+ NVMF_OPT_DHCHAP_SECRET |\
+ NVMF_OPT_DHCHAP_BIDI)

static struct nvme_ctrl *
nvmf_create_ctrl(struct device *dev, const char *buf)
@@ -1171,7 +1231,14 @@ static void __exit nvmf_exit(void)
BUILD_BUG_ON(sizeof(struct nvmf_connect_command) != 64);
BUILD_BUG_ON(sizeof(struct nvmf_property_get_command) != 64);
BUILD_BUG_ON(sizeof(struct nvmf_property_set_command) != 64);
+ BUILD_BUG_ON(sizeof(struct nvmf_auth_send_command) != 64);
+ BUILD_BUG_ON(sizeof(struct nvmf_auth_receive_command) != 64);
BUILD_BUG_ON(sizeof(struct nvmf_connect_data) != 1024);
+ BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_negotiate_data) != 8);
+ BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_challenge_data) != 16);
+ BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_reply_data) != 16);
+ BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_success1_data) != 16);
+ BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_success2_data) != 16);
}

MODULE_LICENSE("GPL v2");
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index a146cb903869..27df1aac5736 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -67,6 +67,8 @@ enum {
NVMF_OPT_TOS = 1 << 19,
NVMF_OPT_FAIL_FAST_TMO = 1 << 20,
NVMF_OPT_HOST_IFACE = 1 << 21,
+ NVMF_OPT_DHCHAP_SECRET = 1 << 22,
+ NVMF_OPT_DHCHAP_BIDI = 1 << 23,
};

/**
@@ -96,6 +98,8 @@ enum {
* @max_reconnects: maximum number of allowed reconnect attempts before removing
* the controller, (-1) means reconnect forever, zero means remove
* immediately;
+ * @dhchap_secret: DH-HMAC-CHAP secret
+ * @dhchap_bidi: enable DH-HMAC-CHAP bi-directional authentication
* @disable_sqflow: disable controller sq flow control
* @hdr_digest: generate/verify header digest (TCP)
* @data_digest: generate/verify data digest (TCP)
@@ -120,6 +124,8 @@ struct nvmf_ctrl_options {
unsigned int kato;
struct nvmf_host *host;
int max_reconnects;
+ char *dhchap_secret;
+ bool dhchap_bidi;
bool disable_sqflow;
bool hdr_digest;
bool data_digest;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9871c0c9374c..b0dcb7d79b9e 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -318,6 +318,15 @@ struct nvme_ctrl {
struct work_struct ana_work;
#endif

+#ifdef CONFIG_NVME_AUTH
+ struct work_struct dhchap_auth_work;
+ struct list_head dhchap_auth_list;
+ struct mutex dhchap_auth_mutex;
+ unsigned char *dhchap_key;
+ size_t dhchap_key_len;
+ u16 transaction;
+#endif
+
/* Power saving configuration */
u64 ps_max_latency_us;
bool apst_enabled;
@@ -885,6 +894,27 @@ static inline bool nvme_ctrl_sgl_supported(struct nvme_ctrl *ctrl)
return ctrl->sgls & ((1 << 0) | (1 << 1));
}

+#ifdef CONFIG_NVME_AUTH
+void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl);
+void nvme_auth_stop(struct nvme_ctrl *ctrl);
+int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid);
+int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid);
+void nvme_auth_free(struct nvme_ctrl *ctrl);
+int nvme_auth_generate_key(struct nvme_ctrl *ctrl);
+#else
+static inline void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl) {}
+static inline void nvme_auth_stop(struct nvme_ctrl *ctrl) {}
+static inline int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
+{
+ return -EPROTONOSUPPORT;
+}
+static inline int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid)
+{
+ return NVME_SC_AUTH_REQUIRED;
+}
+static inline void nvme_auth_free(struct nvme_ctrl *ctrl) {}
+#endif
+
u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
u8 opcode);
int nvme_execute_passthru_rq(struct request *rq);
diff --git a/drivers/nvme/host/trace.c b/drivers/nvme/host/trace.c
index 2a89c5aa0790..1c36fcedea20 100644
--- a/drivers/nvme/host/trace.c
+++ b/drivers/nvme/host/trace.c
@@ -287,6 +287,34 @@ static const char *nvme_trace_fabrics_property_get(struct trace_seq *p, u8 *spc)
return ret;
}

+static const char *nvme_trace_fabrics_auth_send(struct trace_seq *p, u8 *spc)
+{
+ const char *ret = trace_seq_buffer_ptr(p);
+ u8 spsp0 = spc[1];
+ u8 spsp1 = spc[2];
+ u8 secp = spc[3];
+ u32 tl = get_unaligned_le32(spc + 4);
+
+ trace_seq_printf(p, "spsp0=%02x, spsp1=%02x, secp=%02x, tl=%u",
+ spsp0, spsp1, secp, tl);
+ trace_seq_putc(p, 0);
+ return ret;
+}
+
+static const char *nvme_trace_fabrics_auth_receive(struct trace_seq *p, u8 *spc)
+{
+ const char *ret = trace_seq_buffer_ptr(p);
+ u8 spsp0 = spc[1];
+ u8 spsp1 = spc[2];
+ u8 secp = spc[3];
+ u32 al = get_unaligned_le32(spc + 4);
+
+ trace_seq_printf(p, "spsp0=%02x, spsp1=%02x, secp=%02x, al=%u",
+ spsp0, spsp1, secp, al);
+ trace_seq_putc(p, 0);
+ return ret;
+}
+
static const char *nvme_trace_fabrics_common(struct trace_seq *p, u8 *spc)
{
const char *ret = trace_seq_buffer_ptr(p);
@@ -306,6 +334,10 @@ const char *nvme_trace_parse_fabrics_cmd(struct trace_seq *p,
return nvme_trace_fabrics_connect(p, spc);
case nvme_fabrics_type_property_get:
return nvme_trace_fabrics_property_get(p, spc);
+ case nvme_fabrics_type_auth_send:
+ return nvme_trace_fabrics_auth_send(p, spc);
+ case nvme_fabrics_type_auth_receive:
+ return nvme_trace_fabrics_auth_receive(p, spc);
default:
return nvme_trace_fabrics_common(p, spc);
}
--
2.29.2
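For readers following the exchange above, the host response computed by nvme_auth_dhchap_host_response() is an HMAC over the challenge, sequence number, transaction id, and the two NQNs, in the order the crypto_shash_update() calls appear. The sketch below reproduces that computation in userspace; it assumes HMAC-SHA256 (hash id 0x01), the NULL DH group (so the raw rather than augmented challenge is used), and a key already transformed by nvme_auth_transform_key(). It is illustrative only, not part of the patch.

```python
import hmac
import hashlib
import struct

def dhchap_host_response(key: bytes, challenge: bytes, seqnum: int,
                         transaction: int, host_nqn: str,
                         subsys_nqn: str) -> bytes:
    """Mirror of nvme_auth_dhchap_host_response() for HMAC-SHA256:
    HMAC(key, C1 || SEQNUM(le32) || T_ID(le16) || 0x00 ||
              "HostHost" || host NQN || 0x00 || subsystem NQN)."""
    m = hmac.new(key, digestmod=hashlib.sha256)
    m.update(challenge)                       # chap->c1, hash_len bytes
    m.update(struct.pack("<I", seqnum))       # put_unaligned_le32(chap->s1)
    m.update(struct.pack("<H", transaction))  # put_unaligned_le16(transaction)
    m.update(b"\x00")                         # single zero byte
    m.update(b"HostHost")
    m.update(host_nqn.encode())
    m.update(b"\x00")
    m.update(subsys_nqn.encode())
    return m.digest()
```

The controller response in nvme_auth_dhchap_ctrl_response() follows the same pattern with s2, the string "Controller", and the NQNs swapped.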

2021-09-10 06:45:04

by Hannes Reinecke

Subject: [PATCH 05/12] nvme: add definitions for NVMe In-Band authentication

Signed-off-by: Hannes Reinecke <[email protected]>
---
include/linux/nvme.h | 186 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 185 insertions(+), 1 deletion(-)
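The message structures added below are plain little-endian layouts with a fixed header followed by variable-length values ('hl' bytes of challenge, 'dhvlen' bytes of DH value). As a sketch of how the 16-byte nvmf_auth_dhchap_challenge_data header unpacks on the wire, here is a hypothetical userspace decoder (not part of the patch):

```python
import struct

def parse_dhchap_challenge(buf: bytes) -> dict:
    """Decode an nvmf_auth_dhchap_challenge_data message: a fixed
    16-byte header, then 'hl' challenge bytes, then 'dhvlen' DH bytes."""
    (auth_type, auth_id, _rsvd1, t_id, hl, _rsvd2, hashid, dhgid,
     dhvlen, seqnum) = struct.unpack("<BBHHBBBBHI", buf[:16])
    cval = buf[16:16 + hl]                    # 'hl' bytes of challenge value
    dhval = buf[16 + hl:16 + hl + dhvlen]     # 'dhvlen' bytes of DH value
    return {"auth_type": auth_type, "auth_id": auth_id, "t_id": t_id,
            "hl": hl, "hashid": hashid, "dhgid": dhgid,
            "dhvlen": dhvlen, "seqnum": seqnum,
            "cval": cval, "dhval": dhval}
```

Note the fixed header is exactly 16 bytes, matching the BUILD_BUG_ON() size checks added to nvmf_exit() in patch 07.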

diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index b7c4c4130b65..e2142e3246eb 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -19,6 +19,7 @@
#define NVMF_TRSVCID_SIZE 32
#define NVMF_TRADDR_SIZE 256
#define NVMF_TSAS_SIZE 256
+#define NVMF_AUTH_HASH_LEN 64

#define NVME_DISC_SUBSYS_NAME "nqn.2014-08.org.nvmexpress.discovery"

@@ -1263,6 +1264,8 @@ enum nvmf_capsule_command {
nvme_fabrics_type_property_set = 0x00,
nvme_fabrics_type_connect = 0x01,
nvme_fabrics_type_property_get = 0x04,
+ nvme_fabrics_type_auth_send = 0x05,
+ nvme_fabrics_type_auth_receive = 0x06,
};

#define nvme_fabrics_type_name(type) { type, #type }
@@ -1270,7 +1273,9 @@ enum nvmf_capsule_command {
__print_symbolic(type, \
nvme_fabrics_type_name(nvme_fabrics_type_property_set), \
nvme_fabrics_type_name(nvme_fabrics_type_connect), \
- nvme_fabrics_type_name(nvme_fabrics_type_property_get))
+ nvme_fabrics_type_name(nvme_fabrics_type_property_get), \
+ nvme_fabrics_type_name(nvme_fabrics_type_auth_send), \
+ nvme_fabrics_type_name(nvme_fabrics_type_auth_receive))

/*
* If not fabrics command, fctype will be ignored.
@@ -1393,6 +1398,183 @@ struct nvmf_property_get_command {
__u8 resv4[16];
};

+struct nvmf_auth_send_command {
+ __u8 opcode;
+ __u8 resv1;
+ __u16 command_id;
+ __u8 fctype;
+ __u8 resv2[19];
+ union nvme_data_ptr dptr;
+ __u8 resv3;
+ __u8 spsp0;
+ __u8 spsp1;
+ __u8 secp;
+ __le32 tl;
+ __u8 resv4[16];
+};
+
+struct nvmf_auth_receive_command {
+ __u8 opcode;
+ __u8 resv1;
+ __u16 command_id;
+ __u8 fctype;
+ __u8 resv2[19];
+ union nvme_data_ptr dptr;
+ __u8 resv3;
+ __u8 spsp0;
+ __u8 spsp1;
+ __u8 secp;
+ __le32 al;
+ __u8 resv4[16];
+};
+
+/* Value for secp */
+enum {
+ NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER = 0xe9,
+};
+
+/* Defined values for auth_type */
+enum {
+ NVME_AUTH_COMMON_MESSAGES = 0x00,
+ NVME_AUTH_DHCHAP_MESSAGES = 0x01,
+};
+
+/* Defined messages for auth_id */
+enum {
+ NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE = 0x00,
+ NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE = 0x01,
+ NVME_AUTH_DHCHAP_MESSAGE_REPLY = 0x02,
+ NVME_AUTH_DHCHAP_MESSAGE_SUCCESS1 = 0x03,
+ NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2 = 0x04,
+ NVME_AUTH_DHCHAP_MESSAGE_FAILURE2 = 0xf0,
+ NVME_AUTH_DHCHAP_MESSAGE_FAILURE1 = 0xf1,
+};
+
+struct nvmf_auth_dhchap_protocol_descriptor {
+ __u8 authid;
+ __u8 rsvd;
+ __u8 halen;
+ __u8 dhlen;
+ __u8 idlist[60];
+};
+
+enum {
+ NVME_AUTH_DHCHAP_AUTH_ID = 0x01,
+};
+
+/* Defined hash functions for DH-HMAC-CHAP authentication */
+enum {
+ NVME_AUTH_DHCHAP_SHA256 = 0x01,
+ NVME_AUTH_DHCHAP_SHA384 = 0x02,
+ NVME_AUTH_DHCHAP_SHA512 = 0x03,
+};
+
+/* Defined Diffie-Hellman group identifiers for DH-HMAC-CHAP authentication */
+enum {
+ NVME_AUTH_DHCHAP_DHGROUP_NULL = 0x00,
+ NVME_AUTH_DHCHAP_DHGROUP_2048 = 0x01,
+ NVME_AUTH_DHCHAP_DHGROUP_3072 = 0x02,
+ NVME_AUTH_DHCHAP_DHGROUP_4096 = 0x03,
+ NVME_AUTH_DHCHAP_DHGROUP_6144 = 0x04,
+ NVME_AUTH_DHCHAP_DHGROUP_8192 = 0x05,
+};
+
+union nvmf_auth_protocol {
+ struct nvmf_auth_dhchap_protocol_descriptor dhchap;
+};
+
+struct nvmf_auth_dhchap_negotiate_data {
+ __u8 auth_type;
+ __u8 auth_id;
+ __le16 rsvd;
+ __le16 t_id;
+ __u8 sc_c;
+ __u8 napd;
+ union nvmf_auth_protocol auth_protocol[];
+};
+
+struct nvmf_auth_dhchap_challenge_data {
+ __u8 auth_type;
+ __u8 auth_id;
+ __u16 rsvd1;
+ __le16 t_id;
+ __u8 hl;
+ __u8 rsvd2;
+ __u8 hashid;
+ __u8 dhgid;
+ __le16 dhvlen;
+ __le32 seqnum;
+ /* 'hl' bytes of challenge value */
+ __u8 cval[];
+ /* followed by 'dhvlen' bytes of DH value */
+};
+
+struct nvmf_auth_dhchap_reply_data {
+ __u8 auth_type;
+ __u8 auth_id;
+ __le16 rsvd1;
+ __le16 t_id;
+ __u8 hl;
+ __u8 rsvd2;
+ __u8 cvalid;
+ __u8 rsvd3;
+ __le16 dhvlen;
+ __le32 seqnum;
+ /* 'hl' bytes of response data */
+ __u8 rval[];
+ /* followed by 'hl' bytes of Challenge value */
+ /* followed by 'dhvlen' bytes of DH value */
+};
+
+enum {
+ NVME_AUTH_DHCHAP_RESPONSE_VALID = (1 << 0),
+};
+
+struct nvmf_auth_dhchap_success1_data {
+ __u8 auth_type;
+ __u8 auth_id;
+ __le16 rsvd1;
+ __le16 t_id;
+ __u8 hl;
+ __u8 rsvd2;
+ __u8 rvalid;
+ __u8 rsvd3[7];
+ /* 'hl' bytes of response value if 'rvalid' is set */
+ __u8 rval[];
+};
+
+struct nvmf_auth_dhchap_success2_data {
+ __u8 auth_type;
+ __u8 auth_id;
+ __le16 rsvd1;
+ __le16 t_id;
+ __u8 rsvd2[10];
+};
+
+struct nvmf_auth_dhchap_failure_data {
+ __u8 auth_type;
+ __u8 auth_id;
+ __le16 rsvd1;
+ __le16 t_id;
+ __u8 rescode;
+ __u8 rescode_exp;
+};
+
+enum {
+ NVME_AUTH_DHCHAP_FAILURE_REASON_FAILED = 0x01,
+};
+
+enum {
+ NVME_AUTH_DHCHAP_FAILURE_FAILED = 0x01,
+ NVME_AUTH_DHCHAP_FAILURE_NOT_USABLE = 0x02,
+ NVME_AUTH_DHCHAP_FAILURE_CONCAT_MISMATCH = 0x03,
+ NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE = 0x04,
+ NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE = 0x05,
+ NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD = 0x06,
+ NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE = 0x07,
+};
+
+
struct nvme_dbbuf {
__u8 opcode;
__u8 flags;
@@ -1436,6 +1618,8 @@ struct nvme_command {
struct nvmf_connect_command connect;
struct nvmf_property_set_command prop_set;
struct nvmf_property_get_command prop_get;
+ struct nvmf_auth_send_command auth_send;
+ struct nvmf_auth_receive_command auth_receive;
struct nvme_dbbuf dbbuf;
struct nvme_directive_cmd directive;
};
--
2.29.2

2021-09-13 09:16:47

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCHv3 00/12] nvme: In-band authentication support


> Hi all,
>
> recent updates to the NVMe spec have added definitions for in-band
> authentication, and seeing that it provides some real benefit
> especially for NVMe-TCP here's an attempt to implement it.
>
> Tricky bit here is that the specification orients itself on TLS 1.3,
> but supports only the FFDHE groups. Which of course the kernel doesn't
> support. I've been able to come up with a patch for this, but as this
> is my first attempt to fix anything in the crypto area I would invite
> people more familiar with these matters to have a look.
>
> Also note that this is just for in-band authentication. Secure
> concatenation (ie starting TLS with the negotiated parameters) is not
> implemented; one would need to update the kernel TLS implementation
> for this, which at this time is beyond scope.
>
> As usual, comments and reviews are welcome.

Still no nvme-cli nor nvmetcli :(

2021-09-13 09:44:40

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCHv3 00/12] nvme: In-band authentication support

On 9/13/21 11:16 AM, Sagi Grimberg wrote:
>
>> Hi all,
>>
>> recent updates to the NVMe spec have added definitions for in-band
>> authentication, and seeing that it provides some real benefit
>> especially for NVMe-TCP here's an attempt to implement it.
>>
>> Tricky bit here is that the specification orients itself on TLS 1.3,
>> but supports only the FFDHE groups. Which of course the kernel doesn't
>> support. I've been able to come up with a patch for this, but as this
>> is my first attempt to fix anything in the crypto area I would invite
>> people more familiar with these matters to have a look.
>>
>> Also note that this is just for in-band authentication. Secure
>> concatenation (ie starting TLS with the negotiated parameters) is not
>> implemented; one would need to update the kernel TLS implementation
>> for this, which at this time is beyond scope.
>>
>> As usual, comments and reviews are welcome.
>
> Still no nvme-cli nor nvmetcli :(

Just sent it (for libnvme and nvme-cli). Patch for nvmetcli to follow.

Cheers,

Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

2021-09-13 13:16:45

by Sagi Grimberg

[permalink] [raw]

2021-09-13 13:59:15

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication



On 9/10/21 9:43 AM, Hannes Reinecke wrote:
> Implement NVMe-oF In-Band authentication according to NVMe TPAR 8006.
> This patch adds two new fabric options 'dhchap_secret' to specify the
> pre-shared key (in ASCII respresentation according to NVMe 2.0 section
> 8.13.5.8 'Secret representation') and 'dhchap_bidi' to request bi-directional
> authentication of both the host and the controller.
> Re-authentication can be triggered by writing the PSK into the new
> controller sysfs attribute 'dhchap_secret'.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
> ---
> drivers/nvme/host/Kconfig | 12 +
> drivers/nvme/host/Makefile | 1 +
> drivers/nvme/host/auth.c | 1285 +++++++++++++++++++++++++++++++++++
> drivers/nvme/host/auth.h | 25 +
> drivers/nvme/host/core.c | 79 ++-
> drivers/nvme/host/fabrics.c | 73 +-
> drivers/nvme/host/fabrics.h | 6 +
> drivers/nvme/host/nvme.h | 30 +
> drivers/nvme/host/trace.c | 32 +
> 9 files changed, 1537 insertions(+), 6 deletions(-)
> create mode 100644 drivers/nvme/host/auth.c
> create mode 100644 drivers/nvme/host/auth.h
>
> diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
> index dc0450ca23a3..97e8412dc42d 100644
> --- a/drivers/nvme/host/Kconfig
> +++ b/drivers/nvme/host/Kconfig
> @@ -83,3 +83,15 @@ config NVME_TCP
> from https://github.com/linux-nvme/nvme-cli.
>
> If unsure, say N.
> +
> +config NVME_AUTH
> + bool "NVM Express over Fabrics In-Band Authentication"
> + depends on NVME_CORE
> + select CRYPTO_HMAC
> + select CRYPTO_SHA256
> + select CRYPTO_SHA512
> + help
> + This provides support for NVMe over Fabrics In-Band Authentication
> + for the NVMe over TCP transport.

Not tcp specific...

> diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
> new file mode 100644
> index 000000000000..5393ac16a002
> --- /dev/null
> +++ b/drivers/nvme/host/auth.c
> @@ -0,0 +1,1285 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2020 Hannes Reinecke, SUSE Linux
> + */
> +
> +#include <linux/crc32.h>
> +#include <linux/base64.h>
> +#include <asm/unaligned.h>
> +#include <crypto/hash.h>
> +#include <crypto/dh.h>
> +#include <crypto/ffdhe.h>
> +#include "nvme.h"
> +#include "fabrics.h"
> +#include "auth.h"
> +
> +static u32 nvme_dhchap_seqnum;
> +
> +struct nvme_dhchap_queue_context {
> + struct list_head entry;
> + struct work_struct auth_work;
> + struct nvme_ctrl *ctrl;
> + struct crypto_shash *shash_tfm;
> + struct crypto_kpp *dh_tfm;
> + void *buf;
> + size_t buf_size;
> + int qid;
> + int error;
> + u32 s1;
> + u32 s2;
> + u16 transaction;
> + u8 status;
> + u8 hash_id;
> + u8 hash_len;
> + u8 dhgroup_id;
> + u8 c1[64];
> + u8 c2[64];
> + u8 response[64];
> + u8 *host_response;
> +};
> +
> +static struct nvme_auth_dhgroup_map {
> + int id;
> + const char name[16];
> + const char kpp[16];
> + int privkey_size;
> + int pubkey_size;
> +} dhgroup_map[] = {
> + { .id = NVME_AUTH_DHCHAP_DHGROUP_NULL,
> + .name = "NULL", .kpp = "NULL",

Nit, no need for all-caps, can do "null"

> + .privkey_size = 0, .pubkey_size = 0 },
> + { .id = NVME_AUTH_DHCHAP_DHGROUP_2048,
> + .name = "ffdhe2048", .kpp = "dh",
> + .privkey_size = 256, .pubkey_size = 256 },
> + { .id = NVME_AUTH_DHCHAP_DHGROUP_3072,
> + .name = "ffdhe3072", .kpp = "dh",
> + .privkey_size = 384, .pubkey_size = 384 },
> + { .id = NVME_AUTH_DHCHAP_DHGROUP_4096,
> + .name = "ffdhe4096", .kpp = "dh",
> + .privkey_size = 512, .pubkey_size = 512 },
> + { .id = NVME_AUTH_DHCHAP_DHGROUP_6144,
> + .name = "ffdhe6144", .kpp = "dh",
> + .privkey_size = 768, .pubkey_size = 768 },
> + { .id = NVME_AUTH_DHCHAP_DHGROUP_8192,
> + .name = "ffdhe8192", .kpp = "dh",
> + .privkey_size = 1024, .pubkey_size = 1024 },
> +};
> +
> +const char *nvme_auth_dhgroup_name(int dhgroup_id)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
> + if (dhgroup_map[i].id == dhgroup_id)
> + return dhgroup_map[i].name;
> + }
> + return NULL;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_name);
> +
> +int nvme_auth_dhgroup_pubkey_size(int dhgroup_id)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
> + if (dhgroup_map[i].id == dhgroup_id)
> + return dhgroup_map[i].pubkey_size;
> + }
> + return -1;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_pubkey_size);
> +
> +int nvme_auth_dhgroup_privkey_size(int dhgroup_id)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
> + if (dhgroup_map[i].id == dhgroup_id)
> + return dhgroup_map[i].privkey_size;
> + }
> + return -1;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_privkey_size);
> +
> +const char *nvme_auth_dhgroup_kpp(int dhgroup_id)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
> + if (dhgroup_map[i].id == dhgroup_id)
> + return dhgroup_map[i].kpp;
> + }
> + return NULL;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_kpp);
> +
> +int nvme_auth_dhgroup_id(const char *dhgroup_name)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) {
> + if (!strncmp(dhgroup_map[i].name, dhgroup_name,
> + strlen(dhgroup_map[i].name)))
> + return dhgroup_map[i].id;
> + }
> + return -1;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_id);
> +
> +static struct nvme_dhchap_hash_map {
> + int id;
> + const char hmac[15];
> + const char digest[15];
> +} hash_map[] = {
> + {.id = NVME_AUTH_DHCHAP_SHA256,
> + .hmac = "hmac(sha256)", .digest = "sha256" },
> + {.id = NVME_AUTH_DHCHAP_SHA384,
> + .hmac = "hmac(sha384)", .digest = "sha384" },
> + {.id = NVME_AUTH_DHCHAP_SHA512,
> + .hmac = "hmac(sha512)", .digest = "sha512" },
> +};
> +
> +const char *nvme_auth_hmac_name(int hmac_id)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(hash_map); i++) {
> + if (hash_map[i].id == hmac_id)
> + return hash_map[i].hmac;
> + }
> + return NULL;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_hmac_name);
> +
> +const char *nvme_auth_digest_name(int hmac_id)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(hash_map); i++) {
> + if (hash_map[i].id == hmac_id)
> + return hash_map[i].digest;
> + }
> + return NULL;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_digest_name);
> +
> +int nvme_auth_hmac_id(const char *hmac_name)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(hash_map); i++) {
> + if (!strncmp(hash_map[i].hmac, hmac_name,
> + strlen(hash_map[i].hmac)))
> + return hash_map[i].id;
> + }
> + return -1;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_hmac_id);
> +
> +unsigned char *nvme_auth_extract_secret(unsigned char *secret, size_t *out_len)
> +{
> + unsigned char *key;
> + u32 crc;
> + int key_len;
> + size_t allocated_len;
> +
> + allocated_len = strlen(secret);

Can move to declaration initializer.

> + key = kzalloc(allocated_len, GFP_KERNEL);
> + if (!key)
> + return ERR_PTR(-ENOMEM);
> +
> + key_len = base64_decode(secret, allocated_len, key);
> + if (key_len != 36 && key_len != 52 &&
> + key_len != 68) {
> + pr_debug("Invalid DH-HMAC-CHAP key len %d\n",
> + key_len);
> + kfree_sensitive(key);
> + return ERR_PTR(-EINVAL);
> + }
> +
> + /* The last four bytes are the CRC in little-endian format */
> + key_len -= 4;
> + /*
> + * The linux implementation doesn't do pre- and post-increments,
> + * so we have to do it manually.
> + */
> + crc = ~crc32(~0, key, key_len);
> +
> + if (get_unaligned_le32(key + key_len) != crc) {
> + pr_debug("DH-HMAC-CHAP key crc mismatch (key %08x, crc %08x)\n",
> + get_unaligned_le32(key + key_len), crc);
> + kfree_sensitive(key);
> + return ERR_PTR(-EKEYREJECTED);
> + }
> + *out_len = key_len;
> + return key;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_extract_secret);
> +
> +u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn)
> +{
> + const char *hmac_name = nvme_auth_hmac_name(key_hash);
> + struct crypto_shash *key_tfm;
> + struct shash_desc *shash;
> + u8 *transformed_key;
> + int ret;
> +
> + /* No key transformation required */
> + if (key_hash == 0)
> + return 0;
> +
> + hmac_name = nvme_auth_hmac_name(key_hash);
> + if (!hmac_name) {
> + pr_warn("Invalid key hash id %d\n", key_hash);
> + return ERR_PTR(-EKEYREJECTED);
> + }

newline here.

> + key_tfm = crypto_alloc_shash(hmac_name, 0, 0);
> + if (IS_ERR(key_tfm))
> + return (u8 *)key_tfm;
> +
> + shash = kmalloc(sizeof(struct shash_desc) +
> + crypto_shash_descsize(key_tfm),
> + GFP_KERNEL);
> + if (!shash) {
> + crypto_free_shash(key_tfm);
> + return ERR_PTR(-ENOMEM);
> + }

newline here.

> + transformed_key = kzalloc(crypto_shash_digestsize(key_tfm), GFP_KERNEL);
> + if (!transformed_key) {
> + ret = -ENOMEM;
> + goto out_free_shash;
> + }
> +
> + shash->tfm = key_tfm;
> + ret = crypto_shash_setkey(key_tfm, key, key_len);
> + if (ret < 0)
> + goto out_free_shash;
> + ret = crypto_shash_init(shash);
> + if (ret < 0)
> + goto out_free_shash;
> + ret = crypto_shash_update(shash, nqn, strlen(nqn));
> + if (ret < 0)
> + goto out_free_shash;
> + ret = crypto_shash_update(shash, "NVMe-over-Fabrics", 17);
> + if (ret < 0)
> + goto out_free_shash;
> + ret = crypto_shash_final(shash, transformed_key);
> +out_free_shash:
> + kfree(shash);
> + crypto_free_shash(key_tfm);
> + if (ret < 0) {
> + kfree_sensitive(transformed_key);
> + return ERR_PTR(ret);
> + }

Any reason why this is not a reverse cleanup with goto call-sites
standard style?

> + return transformed_key;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_transform_key);
> +
> +static int nvme_auth_hash_skey(int hmac_id, u8 *skey, size_t skey_len, u8 *hkey)
> +{
> + const char *digest_name;
> + struct crypto_shash *tfm;
> + int ret;
> +
> + digest_name = nvme_auth_digest_name(hmac_id);
> + if (!digest_name) {
> + pr_debug("%s: failed to get digest for %d\n", __func__,
> + hmac_id);
> + return -EINVAL;
> + }
> + tfm = crypto_alloc_shash(digest_name, 0, 0);
> + if (IS_ERR(tfm))
> + return -ENOMEM;
> +
> + ret = crypto_shash_tfm_digest(tfm, skey, skey_len, hkey);
> + if (ret < 0)
> + pr_debug("%s: Failed to hash digest len %zu\n", __func__,
> + skey_len);
> +
> + crypto_free_shash(tfm);
> + return ret;
> +}
> +
> +int nvme_auth_augmented_challenge(u8 hmac_id, u8 *skey, size_t skey_len,
> + u8 *challenge, u8 *aug, size_t hlen)
> +{
> + struct crypto_shash *tfm;
> + struct shash_desc *desc;
> + u8 *hashed_key;
> + const char *hmac_name;
> + int ret;
> +
> + hashed_key = kmalloc(hlen, GFP_KERNEL);
> + if (!hashed_key)
> + return -ENOMEM;
> +
> + ret = nvme_auth_hash_skey(hmac_id, skey,
> + skey_len, hashed_key);
> + if (ret < 0)
> + goto out_free_key;
> +
> + hmac_name = nvme_auth_hmac_name(hmac_id);
> + if (!hmac_name) {
> + pr_warn("%s: invalid hash algoritm %d\n",
> + __func__, hmac_id);
> + ret = -EINVAL;
> + goto out_free_key;
> + }

newline.

> + tfm = crypto_alloc_shash(hmac_name, 0, 0);
> + if (IS_ERR(tfm)) {
> + ret = PTR_ERR(tfm);
> + goto out_free_key;
> + }

newline

> + desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
> + GFP_KERNEL);
> + if (!desc) {
> + ret = -ENOMEM;
> + goto out_free_hash;
> + }
> + desc->tfm = tfm;
> +
> + ret = crypto_shash_setkey(tfm, hashed_key, hlen);
> + if (ret)
> + goto out_free_desc;
> +
> + ret = crypto_shash_init(desc);
> + if (ret)
> + goto out_free_desc;
> +
> + ret = crypto_shash_update(desc, challenge, hlen);
> + if (ret)
> + goto out_free_desc;
> +
> + ret = crypto_shash_final(desc, aug);
> +out_free_desc:
> + kfree_sensitive(desc);
> +out_free_hash:
> + crypto_free_shash(tfm);
> +out_free_key:
> + kfree_sensitive(hashed_key);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_augmented_challenge);
> +
> +int nvme_auth_gen_privkey(struct crypto_kpp *dh_tfm, int dh_gid)
> +{
> + char *pkey;
> + int ret, pkey_len;
> +
> + if (dh_gid == NVME_AUTH_DHCHAP_DHGROUP_2048 ||
> + dh_gid == NVME_AUTH_DHCHAP_DHGROUP_3072 ||
> + dh_gid == NVME_AUTH_DHCHAP_DHGROUP_4096 ||
> + dh_gid == NVME_AUTH_DHCHAP_DHGROUP_6144 ||
> + dh_gid == NVME_AUTH_DHCHAP_DHGROUP_8192) {
> + struct dh p = {0};
> + int bits = nvme_auth_dhgroup_pubkey_size(dh_gid) << 3;
> + int dh_secret_len = 64;
> + u8 *dh_secret = kzalloc(dh_secret_len, GFP_KERNEL);
> +
> + if (!dh_secret)
> + return -ENOMEM;
> +
> + /*
> + * NVMe base spec v2.0: The DH value shall be set to the value
> + * of g^x mod p, where 'x' is a random number selected by the
> + * host that shall be at least 256 bits long.
> + *
> + * We will be using a 512 bit random number as private key.
> + * This is large enough to provide adequate security, but
> + * small enough such that we can trivially conform to
> + * NIST SP800-56A section 5.6.1.1.4 if
> + * we guarantee that the random number is not either
> + * all 0xff or all 0x00. But that should be guaranteed
> + * by the in-kernel RNG anyway.
> + */
> + get_random_bytes(dh_secret, dh_secret_len);
> +
> + ret = crypto_ffdhe_params(&p, bits);
> + if (ret) {
> + kfree_sensitive(dh_secret);
> + return ret;
> + }
> +
> + p.key = dh_secret;
> + p.key_size = dh_secret_len;
> +
> + pkey_len = crypto_dh_key_len(&p);
> + pkey = kmalloc(pkey_len, GFP_KERNEL);
> + if (!pkey) {
> + kfree_sensitive(dh_secret);
> + return -ENOMEM;
> + }
> +
> + get_random_bytes(pkey, pkey_len);
> + ret = crypto_dh_encode_key(pkey, pkey_len, &p);
> + if (ret) {
> + pr_debug("failed to encode private key, error %d\n",
> + ret);
> + kfree_sensitive(dh_secret);
> + goto out;
> + }
> + } else {
> + pr_warn("invalid dh group %d\n", dh_gid);
> + return -EINVAL;
> + }
> + ret = crypto_kpp_set_secret(dh_tfm, pkey, pkey_len);
> + if (ret)
> + pr_debug("failed to set private key, error %d\n", ret);
> +out:
> + kfree_sensitive(pkey);

pkey can be unset here.

> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_gen_privkey);
> +
> +int nvme_auth_gen_pubkey(struct crypto_kpp *dh_tfm,
> + u8 *host_key, size_t host_key_len)
> +{
> + struct kpp_request *req;
> + struct crypto_wait wait;
> + struct scatterlist dst;
> + int ret;
> +
> + req = kpp_request_alloc(dh_tfm, GFP_KERNEL);
> + if (!req)
> + return -ENOMEM;
> +
> + crypto_init_wait(&wait);
> + kpp_request_set_input(req, NULL, 0);
> + sg_init_one(&dst, host_key, host_key_len);
> + kpp_request_set_output(req, &dst, host_key_len);
> + kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
> + crypto_req_done, &wait);
> +
> + ret = crypto_wait_req(crypto_kpp_generate_public_key(req), &wait);
> +

no need for this newline

> + kpp_request_free(req);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_gen_pubkey);
> +
> +int nvme_auth_gen_shared_secret(struct crypto_kpp *dh_tfm,
> + u8 *ctrl_key, size_t ctrl_key_len,
> + u8 *sess_key, size_t sess_key_len)
> +{
> + struct kpp_request *req;
> + struct crypto_wait wait;
> + struct scatterlist src, dst;
> + int ret;
> +
> + req = kpp_request_alloc(dh_tfm, GFP_KERNEL);
> + if (!req)
> + return -ENOMEM;
> +
> + crypto_init_wait(&wait);
> + sg_init_one(&src, ctrl_key, ctrl_key_len);
> + kpp_request_set_input(req, &src, ctrl_key_len);
> + sg_init_one(&dst, sess_key, sess_key_len);
> + kpp_request_set_output(req, &dst, sess_key_len);
> + kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
> + crypto_req_done, &wait);
> +
> + ret = crypto_wait_req(crypto_kpp_compute_shared_secret(req), &wait);
> +
> + kpp_request_free(req);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_gen_shared_secret);
> +
> +static int nvme_auth_send(struct nvme_ctrl *ctrl, int qid,
> + void *data, size_t tl)
> +{
> + struct nvme_command cmd = {};
> + blk_mq_req_flags_t flags = qid == NVME_QID_ANY ?
> + 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED;
> + struct request_queue *q = qid == NVME_QID_ANY ?
> + ctrl->fabrics_q : ctrl->connect_q;
> + int ret;
> +
> + cmd.auth_send.opcode = nvme_fabrics_command;
> + cmd.auth_send.fctype = nvme_fabrics_type_auth_send;
> + cmd.auth_send.secp = NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER;
> + cmd.auth_send.spsp0 = 0x01;
> + cmd.auth_send.spsp1 = 0x01;
> + cmd.auth_send.tl = tl;
> +
> + ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, tl, 0, qid,
> + 0, flags);
> + if (ret > 0)
> + dev_dbg(ctrl->device,
> + "%s: qid %d nvme status %d\n", __func__, qid, ret);
> + else if (ret < 0)
> + dev_dbg(ctrl->device,
> + "%s: qid %d error %d\n", __func__, qid, ret);
> + return ret;
> +}
> +
> +static int nvme_auth_receive(struct nvme_ctrl *ctrl, int qid,
> + void *buf, size_t al)
> +{
> + struct nvme_command cmd = {};
> + blk_mq_req_flags_t flags = qid == NVME_QID_ANY ?
> + 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED;
> + struct request_queue *q = qid == NVME_QID_ANY ?
> + ctrl->fabrics_q : ctrl->connect_q;
> + int ret;
> +
> + cmd.auth_receive.opcode = nvme_fabrics_command;
> + cmd.auth_receive.fctype = nvme_fabrics_type_auth_receive;
> + cmd.auth_receive.secp = NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER;
> + cmd.auth_receive.spsp0 = 0x01;
> + cmd.auth_receive.spsp1 = 0x01;
> + cmd.auth_receive.al = al;
> +
> + ret = __nvme_submit_sync_cmd(q, &cmd, NULL, buf, al, 0, qid,
> + 0, flags);
> + if (ret > 0) {
> + dev_dbg(ctrl->device, "%s: qid %d nvme status %x\n",
> + __func__, qid, ret);
> + ret = -EIO;

Why EIO?

> + }
> + if (ret < 0) {
> + dev_dbg(ctrl->device, "%s: qid %d error %d\n",
> + __func__, qid, ret);
> + return ret;
> + }

Why did you choose to do these error conditionals differently for the
send and receive functions?

> +
> + return 0;
> +}
> +
> +static int nvme_auth_receive_validate(struct nvme_ctrl *ctrl, int qid,
> + struct nvmf_auth_dhchap_failure_data *data,
> + u16 transaction, u8 expected_msg)
> +{
> + dev_dbg(ctrl->device, "%s: qid %d auth_type %d auth_id %x\n",
> + __func__, qid, data->auth_type, data->auth_id);
> +
> + if (data->auth_type == NVME_AUTH_COMMON_MESSAGES &&
> + data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) {
> + return data->rescode_exp;
> + }
> + if (data->auth_type != NVME_AUTH_DHCHAP_MESSAGES ||
> + data->auth_id != expected_msg) {
> + dev_warn(ctrl->device,
> + "qid %d invalid message %02x/%02x\n",
> + qid, data->auth_type, data->auth_id);
> + return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE;
> + }
> + if (le16_to_cpu(data->t_id) != transaction) {
> + dev_warn(ctrl->device,
> + "qid %d invalid transaction ID %d\n",
> + qid, le16_to_cpu(data->t_id));
> + return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE;
> + }
> + return 0;
> +}
> +
> +static int nvme_auth_set_dhchap_negotiate_data(struct nvme_ctrl *ctrl,
> + struct nvme_dhchap_queue_context *chap)
> +{
> + struct nvmf_auth_dhchap_negotiate_data *data = chap->buf;
> + size_t size = sizeof(*data) + sizeof(union nvmf_auth_protocol);
> +
> + if (chap->buf_size < size) {
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;

Is this an internal error? not sure I understand setting of this status

> + return -EINVAL;
> + }
> + memset((u8 *)chap->buf, 0, size);
> + data->auth_type = NVME_AUTH_COMMON_MESSAGES;
> + data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
> + data->t_id = cpu_to_le16(chap->transaction);
> + data->sc_c = 0; /* No secure channel concatenation */
> + data->napd = 1;
> + data->auth_protocol[0].dhchap.authid = NVME_AUTH_DHCHAP_AUTH_ID;
> + data->auth_protocol[0].dhchap.halen = 3;
> + data->auth_protocol[0].dhchap.dhlen = 6;
> + data->auth_protocol[0].dhchap.idlist[0] = NVME_AUTH_DHCHAP_SHA256;
> + data->auth_protocol[0].dhchap.idlist[1] = NVME_AUTH_DHCHAP_SHA384;
> + data->auth_protocol[0].dhchap.idlist[2] = NVME_AUTH_DHCHAP_SHA512;
> + data->auth_protocol[0].dhchap.idlist[3] = NVME_AUTH_DHCHAP_DHGROUP_NULL;
> + data->auth_protocol[0].dhchap.idlist[4] = NVME_AUTH_DHCHAP_DHGROUP_2048;
> + data->auth_protocol[0].dhchap.idlist[5] = NVME_AUTH_DHCHAP_DHGROUP_3072;
> + data->auth_protocol[0].dhchap.idlist[6] = NVME_AUTH_DHCHAP_DHGROUP_4096;
> + data->auth_protocol[0].dhchap.idlist[7] = NVME_AUTH_DHCHAP_DHGROUP_6144;
> + data->auth_protocol[0].dhchap.idlist[8] = NVME_AUTH_DHCHAP_DHGROUP_8192;
> +
> + return size;
> +}
> +
> +static int nvme_auth_process_dhchap_challenge(struct nvme_ctrl *ctrl,
> + struct nvme_dhchap_queue_context *chap)
> +{
> + struct nvmf_auth_dhchap_challenge_data *data = chap->buf;
> + size_t size = sizeof(*data) + data->hl + data->dhvlen;
> + const char *hmac_name, *gid_name;
> +
> + if (chap->buf_size < size) {
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> + return NVME_SC_INVALID_FIELD;
> + }
> +
> + hmac_name = nvme_auth_hmac_name(data->hashid);
> + if (!hmac_name) {
> + dev_warn(ctrl->device,
> + "qid %d: invalid HASH ID %d\n",
> + chap->qid, data->hashid);
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
> + return -EPROTO;
> + }
> + if (chap->hash_id == data->hashid && chap->shash_tfm &&
> + !strcmp(crypto_shash_alg_name(chap->shash_tfm), hmac_name) &&
> + crypto_shash_digestsize(chap->shash_tfm) == data->hl) {
> + dev_dbg(ctrl->device,
> + "qid %d: reuse existing hash %s\n",
> + chap->qid, hmac_name);
> + goto select_kpp;
> + }

newline

> + if (chap->shash_tfm) {
> + crypto_free_shash(chap->shash_tfm);
> + chap->hash_id = 0;
> + chap->hash_len = 0;
> + }

newline

> + chap->shash_tfm = crypto_alloc_shash(hmac_name, 0,
> + CRYPTO_ALG_ALLOCATES_MEMORY);
> + if (IS_ERR(chap->shash_tfm)) {
> + dev_warn(ctrl->device,
> + "qid %d: failed to allocate hash %s, error %ld\n",
> + chap->qid, hmac_name, PTR_ERR(chap->shash_tfm));
> + chap->shash_tfm = NULL;
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
> + return NVME_SC_AUTH_REQUIRED;
> + }

newline

> + if (crypto_shash_digestsize(chap->shash_tfm) != data->hl) {
> + dev_warn(ctrl->device,
> + "qid %d: invalid hash length %d\n",
> + chap->qid, data->hl);
> + crypto_free_shash(chap->shash_tfm);
> + chap->shash_tfm = NULL;
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
> + return NVME_SC_AUTH_REQUIRED;
> + }

newline

> + if (chap->hash_id != data->hashid) {
> + kfree(chap->host_response);

kfree_sensitive? also why is it freed here? where was it allocated?

> + chap->host_response = NULL;
> + }
> + chap->hash_id = data->hashid;
> + chap->hash_len = data->hl;
> + dev_dbg(ctrl->device, "qid %d: selected hash %s\n",
> + chap->qid, hmac_name);
> +
> + gid_name = nvme_auth_dhgroup_kpp(data->dhgid);
> + if (!gid_name) {
> + dev_warn(ctrl->device,
> + "qid %d: invalid DH group id %d\n",
> + chap->qid, data->dhgid);
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
> + return -EPROTO;

No need for all the previous frees?
Maybe we can rework these such that we first do all the checks and then
go and allocate stuff?

> + }
> +
> + if (data->dhgid != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
> + if (data->dhvlen == 0) {
> + dev_warn(ctrl->device,
> + "qid %d: empty DH value\n",
> + chap->qid);
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
> + return -EPROTO;
> + }
> + chap->dh_tfm = crypto_alloc_kpp(gid_name, 0, 0);
> + if (IS_ERR(chap->dh_tfm)) {
> + int ret = PTR_ERR(chap->dh_tfm);
> +
> + dev_warn(ctrl->device,
> + "qid %d: failed to initialize %s\n",
> + chap->qid, gid_name);
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
> + chap->dh_tfm = NULL;
> + return ret;
> + }
> + chap->dhgroup_id = data->dhgid;
> + } else if (data->dhvlen != 0) {
> + dev_warn(ctrl->device,
> + "qid %d: invalid DH value for NULL DH\n",
> + chap->qid);
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
> + return -EPROTO;
> + }
> + dev_dbg(ctrl->device, "qid %d: selected DH group %s\n",
> + chap->qid, gid_name);
> +
> +select_kpp:
> + chap->s1 = le32_to_cpu(data->seqnum);
> + memcpy(chap->c1, data->cval, chap->hash_len);
> +
> + return 0;
> +}
> +
> +static int nvme_auth_set_dhchap_reply_data(struct nvme_ctrl *ctrl,
> + struct nvme_dhchap_queue_context *chap)
> +{
> + struct nvmf_auth_dhchap_reply_data *data = chap->buf;
> + size_t size = sizeof(*data);
> +
> + size += 2 * chap->hash_len;
> + if (ctrl->opts->dhchap_bidi) {
> + get_random_bytes(chap->c2, chap->hash_len);
> + chap->s2 = nvme_dhchap_seqnum++;

Any serialization needed on nvme_dhchap_seqnum?

> + } else
> + memset(chap->c2, 0, chap->hash_len);
> +
> +
> + if (chap->buf_size < size) {
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> + return -EINVAL;
> + }
> + memset(chap->buf, 0, size);
> + data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
> + data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_REPLY;
> + data->t_id = cpu_to_le16(chap->transaction);
> + data->hl = chap->hash_len;
> + data->dhvlen = 0;
> + data->seqnum = cpu_to_le32(chap->s2);
> + memcpy(data->rval, chap->response, chap->hash_len);
> + if (ctrl->opts->dhchap_bidi) {

Can we unite the "if (ctrl->opts->dhchap_bidi)"
conditionals?

> + dev_dbg(ctrl->device, "%s: qid %d ctrl challenge %*ph\n",
> + __func__, chap->qid,
> + chap->hash_len, chap->c2);
> + data->cvalid = 1;
> + memcpy(data->rval + chap->hash_len, chap->c2,
> + chap->hash_len);
> + }
> + return size;
> +}
> +
> +static int nvme_auth_process_dhchap_success1(struct nvme_ctrl *ctrl,
> + struct nvme_dhchap_queue_context *chap)
> +{
> + struct nvmf_auth_dhchap_success1_data *data = chap->buf;
> + size_t size = sizeof(*data);
> +
> + if (ctrl->opts->dhchap_bidi)
> + size += chap->hash_len;
> +
> +
> + if (chap->buf_size < size) {
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> + return NVME_SC_INVALID_FIELD;
> + }
> +
> + if (data->hl != chap->hash_len) {
> + dev_warn(ctrl->device,
> + "qid %d: invalid hash length %d\n",
> + chap->qid, data->hl);
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
> + return NVME_SC_INVALID_FIELD;
> + }
> +
> + if (!data->rvalid)
> + return 0;
> +
> + /* Validate controller response */
> + if (memcmp(chap->response, data->rval, data->hl)) {
> + dev_dbg(ctrl->device, "%s: qid %d ctrl response %*ph\n",
> + __func__, chap->qid, chap->hash_len, data->rval);
> + dev_dbg(ctrl->device, "%s: qid %d host response %*ph\n",
> + __func__, chap->qid, chap->hash_len, chap->response);
> + dev_warn(ctrl->device,
> + "qid %d: controller authentication failed\n",
> + chap->qid);
> + chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
> + return NVME_SC_AUTH_REQUIRED;
> + }
> + dev_info(ctrl->device,
> + "qid %d: controller authenticated\n",
> + chap->qid);
> + return 0;
> +}
> +
> +static int nvme_auth_set_dhchap_success2_data(struct nvme_ctrl *ctrl,
> + struct nvme_dhchap_queue_context *chap)
> +{
> + struct nvmf_auth_dhchap_success2_data *data = chap->buf;
> + size_t size = sizeof(*data);
> +
> + memset(chap->buf, 0, size);
> + data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
> + data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2;
> + data->t_id = cpu_to_le16(chap->transaction);
> +
> + return size;
> +}
> +
> +static int nvme_auth_set_dhchap_failure2_data(struct nvme_ctrl *ctrl,
> + struct nvme_dhchap_queue_context *chap)
> +{
> + struct nvmf_auth_dhchap_failure_data *data = chap->buf;
> + size_t size = sizeof(*data);
> +
> + memset(chap->buf, 0, size);
> + data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
> + data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_FAILURE2;
> + data->t_id = cpu_to_le16(chap->transaction);
> + data->rescode = NVME_AUTH_DHCHAP_FAILURE_REASON_FAILED;
> + data->rescode_exp = chap->status;
> +
> + return size;
> +}
> +
> +static int nvme_auth_dhchap_host_response(struct nvme_ctrl *ctrl,
> + struct nvme_dhchap_queue_context *chap)
> +{
> + SHASH_DESC_ON_STACK(shash, chap->shash_tfm);
> + u8 buf[4], *challenge = chap->c1;
> + int ret;
> +
> + dev_dbg(ctrl->device, "%s: qid %d host response seq %d transaction %d\n",
> + __func__, chap->qid, chap->s1, chap->transaction);
> + if (chap->dh_tfm) {
> + challenge = kmalloc(chap->hash_len, GFP_KERNEL);
> + if (!challenge) {
> + ret = -ENOMEM;
> + goto out;
> + }
> + ret = nvme_auth_augmented_challenge(chap->hash_id,
> + chap->sess_key,
> + chap->sess_key_len,
> + chap->c1, challenge,
> + chap->hash_len);
> + if (ret)
> + goto out;
> + }
> + shash->tfm = chap->shash_tfm;
> + ret = crypto_shash_init(shash);
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, challenge, chap->hash_len);
> + if (ret)
> + goto out;
> + put_unaligned_le32(chap->s1, buf);
> + ret = crypto_shash_update(shash, buf, 4);
> + if (ret)
> + goto out;
> + put_unaligned_le16(chap->transaction, buf);
> + ret = crypto_shash_update(shash, buf, 2);
> + if (ret)
> + goto out;
> + memset(buf, 0, sizeof(buf));
> + ret = crypto_shash_update(shash, buf, 1);
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, "HostHost", 8);
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, ctrl->opts->host->nqn,
> + strlen(ctrl->opts->host->nqn));
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, buf, 1);
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, ctrl->opts->subsysnqn,
> + strlen(ctrl->opts->subsysnqn));
> + if (ret)
> + goto out;
> + ret = crypto_shash_final(shash, chap->response);
> +out:
> + if (challenge != chap->c1)
> + kfree(challenge);
> + return ret;
> +}
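For reference, the transcript that nvme_auth_dhchap_host_response() feeds into the keyed hash is easier to see in a userspace sketch. This is an illustration only: HMAC-SHA256 stands in for whichever hash was negotiated, and the key/NQN values in the test are made up — the real key is the transformed DH-CHAP secret set up elsewhere in the patch.

```python
import hmac
import hashlib
import struct

def dhchap_host_response(key, challenge, seqnum, transaction,
                         hostnqn, subsysnqn):
    # Mirrors the crypto_shash_update() sequence above:
    # C1 || SeqNum (le32) || Transaction (le16) || 0x00 ||
    # "HostHost" || HostNQN || 0x00 || SubsysNQN
    m = hmac.new(key, digestmod=hashlib.sha256)
    m.update(challenge)                  # (augmented) challenge C1
    m.update(struct.pack('<I', seqnum))  # chap->s1, little endian
    m.update(struct.pack('<H', transaction))
    m.update(b'\x00')                    # single zero byte separator
    m.update(b'HostHost')
    m.update(hostnqn.encode())
    m.update(b'\x00')                    # second zero byte separator
    m.update(subsysnqn.encode())
    return m.digest()
```

The controller response computed a few functions below uses the same transcript with "Controller" in place of "HostHost" and the two NQNs swapped.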
> +
> +static int nvme_auth_dhchap_ctrl_response(struct nvme_ctrl *ctrl,
> + struct nvme_dhchap_queue_context *chap)
> +{
> + SHASH_DESC_ON_STACK(shash, chap->shash_tfm);
> + u8 buf[4], *challenge = chap->c2;
> + int ret;
> +
> + if (chap->dh_tfm) {
> + challenge = kmalloc(chap->hash_len, GFP_KERNEL);
> + if (!challenge) {
> + ret = -ENOMEM;
> + goto out;
> + }
> + ret = nvme_auth_augmented_challenge(chap->hash_id,
> + chap->sess_key,
> + chap->sess_key_len,
> + chap->c2, challenge,
> + chap->hash_len);
> + if (ret)
> + goto out;
> + }
> + dev_dbg(ctrl->device, "%s: qid %d host response seq %d transaction %d\n",
> + __func__, chap->qid, chap->s2, chap->transaction);
> + dev_dbg(ctrl->device, "%s: qid %d challenge %*ph\n",
> + __func__, chap->qid, chap->hash_len, challenge);
> + dev_dbg(ctrl->device, "%s: qid %d subsysnqn %s\n",
> + __func__, chap->qid, ctrl->opts->subsysnqn);
> + dev_dbg(ctrl->device, "%s: qid %d hostnqn %s\n",
> + __func__, chap->qid, ctrl->opts->host->nqn);
> + shash->tfm = chap->shash_tfm;
> + ret = crypto_shash_init(shash);
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, challenge, chap->hash_len);
> + if (ret)
> + goto out;
> + put_unaligned_le32(chap->s2, buf);
> + ret = crypto_shash_update(shash, buf, 4);
> + if (ret)
> + goto out;
> + put_unaligned_le16(chap->transaction, buf);
> + ret = crypto_shash_update(shash, buf, 2);
> + if (ret)
> + goto out;
> + memset(buf, 0, 4);
> + ret = crypto_shash_update(shash, buf, 1);
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, "Controller", 10);
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, ctrl->opts->subsysnqn,
> + strlen(ctrl->opts->subsysnqn));
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, buf, 1);
> + if (ret)
> + goto out;
> + ret = crypto_shash_update(shash, ctrl->opts->host->nqn,
> + strlen(ctrl->opts->host->nqn));
> + if (ret)
> + goto out;
> + ret = crypto_shash_final(shash, chap->response);
> +out:
> + if (challenge != chap->c2)
> + kfree(challenge);
> + return ret;
> +}
> +
> +int nvme_auth_generate_key(struct nvme_ctrl *ctrl)
> +{
> + int ret;
> + u8 key_hash;
> +
> + if (!ctrl->opts->dhchap_secret)
> + return 0;
> +
> + if (ctrl->dhchap_key && ctrl->dhchap_key_len)
> + /* Key already set */
> + return 0;
> +
> + if (sscanf(ctrl->opts->dhchap_secret, "DHHC-1:%hhd:%*s:",
> + &key_hash) != 1)
> + return -EINVAL;
> +
> + /* Pass in the secret without the 'DHHC-1:XX:' prefix */
> + ctrl->dhchap_key = nvme_auth_extract_secret(ctrl->opts->dhchap_secret + 10,
> + &ctrl->dhchap_key_len);
> + if (IS_ERR(ctrl->dhchap_key)) {
> + ret = PTR_ERR(ctrl->dhchap_key);
> + ctrl->dhchap_key = NULL;
> + return ret;
> + }
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_generate_key);
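The secret format that nvme_auth_generate_key() parses above can be sketched in userspace. This assumes, as the `+ 10` offset in the code does, that the hash identifier field is always exactly two digits ('DHHC-1:XX:'); the regex is an illustration, not the spec grammar.

```python
import re

def split_dhchap_secret(secret):
    # Mirrors nvme_auth_generate_key(): the sscanf("DHHC-1:%hhd:%*s:")
    # pulls the hash identifier, and the base64 payload starts at
    # offset 10, i.e. just past the fixed-width 'DHHC-1:XX:' prefix.
    m = re.match(r'DHHC-1:([0-9]{2}):([A-Za-z0-9+/=]+):?$', secret)
    if not m:
        raise ValueError('not a DHHC-1 secret')
    return int(m.group(1), 10), m.group(2)
```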
> +
> +static void nvme_auth_reset(struct nvme_dhchap_queue_context *chap)
> +{
> + chap->status = 0;
> + chap->error = 0;
> + chap->s1 = 0;
> + chap->s2 = 0;
> + chap->transaction = 0;
> + memset(chap->c1, 0, sizeof(chap->c1));
> + memset(chap->c2, 0, sizeof(chap->c2));
> +}
> +
> +static void __nvme_auth_free(struct nvme_dhchap_queue_context *chap)
> +{
> + if (chap->shash_tfm)
> + crypto_free_shash(chap->shash_tfm);
> + kfree_sensitive(chap->host_response);
> + kfree(chap->buf);
> + kfree(chap);
> +}
> +
> +static void __nvme_auth_work(struct work_struct *work)
> +{
> + struct nvme_dhchap_queue_context *chap =
> + container_of(work, struct nvme_dhchap_queue_context, auth_work);
> + struct nvme_ctrl *ctrl = chap->ctrl;
> + size_t tl;
> + int ret = 0;
> +
> + chap->transaction = ctrl->transaction++;
> +
> + /* DH-HMAC-CHAP Step 1: send negotiate */
> + dev_dbg(ctrl->device, "%s: qid %d send negotiate\n",
> + __func__, chap->qid);
> + ret = nvme_auth_set_dhchap_negotiate_data(ctrl, chap);
> + if (ret < 0) {
> + chap->error = ret;
> + return;
> + }
> + tl = ret;
> + ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl);
> + if (ret) {
> + chap->error = ret;
> + return;
> + }
> +
> + /* DH-HMAC-CHAP Step 2: receive challenge */
> + dev_dbg(ctrl->device, "%s: qid %d receive challenge\n",
> + __func__, chap->qid);
> +
> + memset(chap->buf, 0, chap->buf_size);
> + ret = nvme_auth_receive(ctrl, chap->qid, chap->buf, chap->buf_size);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid %d failed to receive challenge, %s %d\n",
> + chap->qid, ret < 0 ? "error" : "nvme status", ret);
> + chap->error = ret;
> + return;
> + }
> + ret = nvme_auth_receive_validate(ctrl, chap->qid, chap->buf, chap->transaction,
> + NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE);
> + if (ret) {
> + chap->status = ret;
> + chap->error = NVME_SC_AUTH_REQUIRED;
> + return;
> + }
> +
> + ret = nvme_auth_process_dhchap_challenge(ctrl, chap);
> + if (ret) {
> + /* Invalid challenge parameters */
> + goto fail2;
> + }
> +
> + if (chap->ctrl_key_len) {
> + dev_dbg(ctrl->device,
> + "%s: qid %d DH exponential\n",
> + __func__, chap->qid);
> + ret = nvme_auth_dhchap_exponential(ctrl, chap);
> + if (ret)
> + goto fail2;
> + }
> +
> + dev_dbg(ctrl->device, "%s: qid %d host response\n",
> + __func__, chap->qid);
> + ret = nvme_auth_dhchap_host_response(ctrl, chap);
> + if (ret)
> + goto fail2;
> +
> + /* DH-HMAC-CHAP Step 3: send reply */
> + dev_dbg(ctrl->device, "%s: qid %d send reply\n",
> + __func__, chap->qid);
> + ret = nvme_auth_set_dhchap_reply_data(ctrl, chap);
> + if (ret < 0)
> + goto fail2;
> +
> + tl = ret;
> + ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl);
> + if (ret)
> + goto fail2;
> +
> + /* DH-HMAC-CHAP Step 4: receive success1 */
> + dev_dbg(ctrl->device, "%s: qid %d receive success1\n",
> + __func__, chap->qid);
> +
> + memset(chap->buf, 0, chap->buf_size);
> + ret = nvme_auth_receive(ctrl, chap->qid, chap->buf, chap->buf_size);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid %d failed to receive success1, %s %d\n",
> + chap->qid, ret < 0 ? "error" : "nvme status", ret);
> + chap->error = ret;
> + return;
> + }
> + ret = nvme_auth_receive_validate(ctrl, chap->qid,
> + chap->buf, chap->transaction,
> + NVME_AUTH_DHCHAP_MESSAGE_SUCCESS1);
> + if (ret) {
> + chap->status = ret;
> + chap->error = NVME_SC_AUTH_REQUIRED;
> + return;
> + }
> +
> + if (ctrl->opts->dhchap_bidi) {
> + dev_dbg(ctrl->device,
> + "%s: qid %d controller response\n",
> + __func__, chap->qid);
> + ret = nvme_auth_dhchap_ctrl_response(ctrl, chap);
> + if (ret)
> + goto fail2;
> + }
> +
> + ret = nvme_auth_process_dhchap_success1(ctrl, chap);
> + if (ret < 0) {
> + /* Controller authentication failed */
> + goto fail2;
> + }
> +
> + /* DH-HMAC-CHAP Step 5: send success2 */
> + dev_dbg(ctrl->device, "%s: qid %d send success2\n",
> + __func__, chap->qid);
> + tl = nvme_auth_set_dhchap_success2_data(ctrl, chap);
> + ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl);
> + if (!ret) {
> + chap->error = 0;
> + return;
> + }
> +
> +fail2:
> + dev_dbg(ctrl->device, "%s: qid %d send failure2, status %x\n",
> + __func__, chap->qid, chap->status);
> + tl = nvme_auth_set_dhchap_failure2_data(ctrl, chap);
> + ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl);
> + if (!ret)
> + ret = -EPROTO;
> + chap->error = ret;
> +}
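To summarize the five steps __nvme_auth_work() walks through, here is the message order as seen on the wire, with the sender of each step (a sketch of the protocol shape only; the numeric message IDs live in the patch's nvme.h additions and are omitted here):

```python
# The exchange driven by __nvme_auth_work(), H = host, C = controller.
# On any failure the host substitutes "failure2" for "success2" and
# records the reason in chap->status.
DHCHAP_FLOW = [
    ("H", "negotiate"),   # step 1: propose hash and DH group
    ("C", "challenge"),   # step 2: controller picks parameters, sends C1
    ("H", "reply"),       # step 3: host response (plus C2 if bidirectional)
    ("C", "success1"),    # step 4: may carry the controller response (rvalid)
    ("H", "success2"),    # step 5: host confirms the controller response
]

def host_messages():
    """Messages the host originates, in order."""
    return [msg for sender, msg in DHCHAP_FLOW if sender == "H"]
```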
> +
> +int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
> +{
> + struct nvme_dhchap_queue_context *chap;
> +
> + if (!ctrl->dhchap_key || !ctrl->dhchap_key_len) {
> + dev_warn(ctrl->device, "qid %d: no key\n", qid);
> + return -ENOKEY;
> + }
> +
> + mutex_lock(&ctrl->dhchap_auth_mutex);
> + /* Check if the context is already queued */
> + list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
> + if (chap->qid == qid) {
> + mutex_unlock(&ctrl->dhchap_auth_mutex);
> + queue_work(nvme_wq, &chap->auth_work);
> + return 0;
> + }
> + }
> + chap = kzalloc(sizeof(*chap), GFP_KERNEL);
> + if (!chap) {
> + mutex_unlock(&ctrl->dhchap_auth_mutex);
> + return -ENOMEM;
> + }
> + chap->qid = qid;
> + chap->ctrl = ctrl;
> +
> + /*
> + * Allocate a large enough buffer for the entire negotiation:
> + * 4k should be enough to ffdhe8192.
> + */
> + chap->buf_size = 4096;
> + chap->buf = kzalloc(chap->buf_size, GFP_KERNEL);
> + if (!chap->buf) {
> + mutex_unlock(&ctrl->dhchap_auth_mutex);
> + kfree(chap);
> + return -ENOMEM;
> + }
> +
> + INIT_WORK(&chap->auth_work, __nvme_auth_work);
> + list_add(&chap->entry, &ctrl->dhchap_auth_list);
> + mutex_unlock(&ctrl->dhchap_auth_mutex);
> + queue_work(nvme_wq, &chap->auth_work);

Why is the auth done in a work item? e.g. does that mean it won't fail the connect?

> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_negotiate);
> +
> +int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid)
> +{
> + struct nvme_dhchap_queue_context *chap;
> + int ret;
> +
> + mutex_lock(&ctrl->dhchap_auth_mutex);
> + list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
> + if (chap->qid != qid)
> + continue;
> + mutex_unlock(&ctrl->dhchap_auth_mutex);
> + flush_work(&chap->auth_work);
> + ret = chap->error;
> + nvme_auth_reset(chap);
> + return ret;
> + }
> + mutex_unlock(&ctrl->dhchap_auth_mutex);
> + return -ENXIO;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_wait);
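The negotiate/wait split above — queue the per-queue context onto a workqueue, then later flush it and collect the error — is a common kernel pattern. A rough userspace analogue, with a thread standing in for the work item (illustrative only, none of this maps 1:1 to workqueue semantics):

```python
import threading

class QueueAuthContext:
    """Userspace analogue of nvme_dhchap_queue_context: negotiation
    runs asynchronously, and the connect path later waits on it."""

    def __init__(self, qid):
        self.qid = qid
        self.error = None
        self._thread = None

    def negotiate(self, auth_fn):
        # queue_work(nvme_wq, &chap->auth_work) equivalent.
        def work():
            self.error = auth_fn(self.qid)
        self._thread = threading.Thread(target=work)
        self._thread.start()

    def wait(self):
        # flush_work() equivalent, then read chap->error.
        self._thread.join()
        return self.error
```

This mirrors how nvmf_connect_admin_queue() (further down in the patch) calls nvme_auth_negotiate() and then serializes on nvme_auth_wait().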
> +
> +/* Assumes that the controller is in state RESETTING */
> +static void nvme_dhchap_auth_work(struct work_struct *work)
> +{
> + struct nvme_ctrl *ctrl =
> + container_of(work, struct nvme_ctrl, dhchap_auth_work);
> + int ret, q;
> +
> + nvme_stop_queues(ctrl);
> + /* Authenticate admin queue first */
> + ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid 0: error %d setting up authentication\n", ret);
> + goto out;
> + }
> + ret = nvme_auth_wait(ctrl, NVME_QID_ANY);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid 0: authentication failed\n");
> + goto out;
> + }
> + dev_info(ctrl->device, "qid 0: authenticated\n");
> +
> + for (q = 1; q < ctrl->queue_count; q++) {
> + ret = nvme_auth_negotiate(ctrl, q);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid %d: error %d setting up authentication\n",
> + q, ret);
> + goto out;
> + }
> + }
> +out:
> + /*
> + * Failure is a soft-state; credentials remain valid until
> + * the controller terminates the connection.
> + */
> + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
> + nvme_start_queues(ctrl);
> +}
> +
> +void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl)
> +{
> + INIT_LIST_HEAD(&ctrl->dhchap_auth_list);
> + INIT_WORK(&ctrl->dhchap_auth_work, nvme_dhchap_auth_work);
> + mutex_init(&ctrl->dhchap_auth_mutex);
> + nvme_auth_generate_key(ctrl);
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_init_ctrl);
> +
> +void nvme_auth_stop(struct nvme_ctrl *ctrl)
> +{
> + struct nvme_dhchap_queue_context *chap = NULL, *tmp;
> +
> + cancel_work_sync(&ctrl->dhchap_auth_work);
> + mutex_lock(&ctrl->dhchap_auth_mutex);
> + list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry)
> + cancel_work_sync(&chap->auth_work);
> + mutex_unlock(&ctrl->dhchap_auth_mutex);
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_stop);
> +
> +void nvme_auth_free(struct nvme_ctrl *ctrl)
> +{
> + struct nvme_dhchap_queue_context *chap = NULL, *tmp;
> +
> + mutex_lock(&ctrl->dhchap_auth_mutex);
> + list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry) {
> + list_del_init(&chap->entry);
> + flush_work(&chap->auth_work);
> + __nvme_auth_free(chap);
> + }
> + mutex_unlock(&ctrl->dhchap_auth_mutex);
> + kfree(ctrl->dhchap_key);
> + ctrl->dhchap_key = NULL;
> + ctrl->dhchap_key_len = 0;
> +}
> +EXPORT_SYMBOL_GPL(nvme_auth_free);
> diff --git a/drivers/nvme/host/auth.h b/drivers/nvme/host/auth.h
> new file mode 100644
> index 000000000000..cf1255f9db6d
> --- /dev/null
> +++ b/drivers/nvme/host/auth.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2021 Hannes Reinecke, SUSE Software Solutions
> + */
> +
> +#ifndef _NVME_AUTH_H
> +#define _NVME_AUTH_H
> +
> +#include <crypto/kpp.h>
> +
> +const char *nvme_auth_dhgroup_name(int dhgroup_id);
> +int nvme_auth_dhgroup_pubkey_size(int dhgroup_id);
> +int nvme_auth_dhgroup_privkey_size(int dhgroup_id);
> +const char *nvme_auth_dhgroup_kpp(int dhgroup_id);
> +int nvme_auth_dhgroup_id(const char *dhgroup_name);
> +
> +const char *nvme_auth_hmac_name(int hmac_id);
> +const char *nvme_auth_digest_name(int hmac_id);
> +int nvme_auth_hmac_id(const char *hmac_name);
> +
> +unsigned char *nvme_auth_extract_secret(unsigned char *dhchap_secret,
> + size_t *dhchap_key_len);
> +u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn);
> +
> +#endif /* _NVME_AUTH_H */
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 7efb31b87f37..f669b054790b 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -24,6 +24,7 @@
>
> #include "nvme.h"
> #include "fabrics.h"
> +#include "auth.h"
>
> #define CREATE_TRACE_POINTS
> #include "trace.h"
> @@ -322,6 +323,7 @@ enum nvme_disposition {
> COMPLETE,
> RETRY,
> FAILOVER,
> + AUTHENTICATE,
> };
>
> static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
> @@ -329,6 +331,9 @@ static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
> if (likely(nvme_req(req)->status == 0))
> return COMPLETE;
>
> + if ((nvme_req(req)->status & 0x7ff) == NVME_SC_AUTH_REQUIRED)
> + return AUTHENTICATE;
> +
> if (blk_noretry_request(req) ||
> (nvme_req(req)->status & NVME_SC_DNR) ||
> nvme_req(req)->retries >= nvme_max_retries)
> @@ -361,11 +366,13 @@ static inline void nvme_end_req(struct request *req)
>
> void nvme_complete_rq(struct request *req)
> {
> + struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
> +
> trace_nvme_complete_rq(req);
> nvme_cleanup_cmd(req);
>
> - if (nvme_req(req)->ctrl->kas)
> - nvme_req(req)->ctrl->comp_seen = true;
> + if (ctrl->kas)
> + ctrl->comp_seen = true;
>
> switch (nvme_decide_disposition(req)) {
> case COMPLETE:
> @@ -377,6 +384,15 @@ void nvme_complete_rq(struct request *req)
> case FAILOVER:
> nvme_failover_req(req);
> return;
> + case AUTHENTICATE:
> +#ifdef CONFIG_NVME_AUTH
> + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> + queue_work(nvme_wq, &ctrl->dhchap_auth_work);

Why is the state change here and not in nvme_dhchap_auth_work?

> + nvme_retry_req(req);
> +#else
> + nvme_end_req(req);
> +#endif
> + return;
> }
> }
> EXPORT_SYMBOL_GPL(nvme_complete_rq);
> @@ -707,7 +723,9 @@ bool __nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
> switch (ctrl->state) {
> case NVME_CTRL_CONNECTING:
> if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
> - req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
> + (req->cmd->fabrics.fctype == nvme_fabrics_type_connect ||
> + req->cmd->fabrics.fctype == nvme_fabrics_type_auth_send ||
> + req->cmd->fabrics.fctype == nvme_fabrics_type_auth_receive))

What happens if the auth command comes before the connect (say in case
of ctrl reset when auth was already queued but not yet executed)?

> return true;
> break;
> default:
> @@ -3458,6 +3476,51 @@ static ssize_t nvme_ctrl_fast_io_fail_tmo_store(struct device *dev,
> static DEVICE_ATTR(fast_io_fail_tmo, S_IRUGO | S_IWUSR,
> nvme_ctrl_fast_io_fail_tmo_show, nvme_ctrl_fast_io_fail_tmo_store);
>
> +#ifdef CONFIG_NVME_AUTH
> +static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
> + struct nvmf_ctrl_options *opts = ctrl->opts;
> +
> + if (!opts->dhchap_secret)
> + return sysfs_emit(buf, "none\n");
> + return sysfs_emit(buf, "%s\n", opts->dhchap_secret);

Should we actually show this? I don't know enough about how strictly
the secret needs to be kept secret...

> +}
> +
> +static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
> + struct device_attribute *attr, const char *buf, size_t count)
> +{
> + struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
> + struct nvmf_ctrl_options *opts = ctrl->opts;
> + char *dhchap_secret;
> +
> + if (!ctrl->opts->dhchap_secret)
> + return -EINVAL;
> + if (count < 7)
> + return -EINVAL;
> + if (memcmp(buf, "DHHC-1:", 7))
> + return -EINVAL;
> +
> + dhchap_secret = kzalloc(count + 1, GFP_KERNEL);
> + if (!dhchap_secret)
> + return -ENOMEM;
> + memcpy(dhchap_secret, buf, count);
> + if (strcmp(dhchap_secret, opts->dhchap_secret)) {
> + kfree(opts->dhchap_secret);
> + opts->dhchap_secret = dhchap_secret;
> + /* Key has changed; reset authentication data */
> + nvme_auth_free(ctrl);
> + nvme_auth_generate_key(ctrl);
> + }

Nice, worth a comment "/* Re-authentication with new secret */"

> + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> + queue_work(nvme_wq, &ctrl->dhchap_auth_work);
> + return count;
> +}
> +DEVICE_ATTR(dhchap_secret, S_IRUGO | S_IWUSR,
> + nvme_ctrl_dhchap_secret_show, nvme_ctrl_dhchap_secret_store);
> +#endif
> +
> static struct attribute *nvme_dev_attrs[] = {
> &dev_attr_reset_controller.attr,
> &dev_attr_rescan_controller.attr,
> @@ -3479,6 +3542,9 @@ static struct attribute *nvme_dev_attrs[] = {
> &dev_attr_reconnect_delay.attr,
> &dev_attr_fast_io_fail_tmo.attr,
> &dev_attr_kato.attr,
> +#ifdef CONFIG_NVME_AUTH
> + &dev_attr_dhchap_secret.attr,
> +#endif
> NULL
> };
>
> @@ -3502,6 +3568,10 @@ static umode_t nvme_dev_attrs_are_visible(struct kobject *kobj,
> return 0;
> if (a == &dev_attr_fast_io_fail_tmo.attr && !ctrl->opts)
> return 0;
> +#ifdef CONFIG_NVME_AUTH
> + if (a == &dev_attr_dhchap_secret.attr && !ctrl->opts)
> + return 0;
> +#endif
>
> return a->mode;
> }
> @@ -4312,6 +4382,7 @@ EXPORT_SYMBOL_GPL(nvme_complete_async_event);
> void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
> {
> nvme_mpath_stop(ctrl);
> + nvme_auth_stop(ctrl);
> nvme_stop_keep_alive(ctrl);
> nvme_stop_failfast_work(ctrl);
> flush_work(&ctrl->async_event_work);
> @@ -4366,6 +4437,7 @@ static void nvme_free_ctrl(struct device *dev)
>
> nvme_free_cels(ctrl);
> nvme_mpath_uninit(ctrl);
> + nvme_auth_free(ctrl);
> __free_page(ctrl->discard_page);
>
> if (subsys) {
> @@ -4456,6 +4528,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
>
> nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device));
> nvme_mpath_init_ctrl(ctrl);
> + nvme_auth_init_ctrl(ctrl);
>
> return 0;
> out_free_name:
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 9a8eade7cd23..ee6058c24743 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -370,6 +370,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
> union nvme_result res;
> struct nvmf_connect_data *data;
> int ret;
> + u32 result;
>
> cmd.connect.opcode = nvme_fabrics_command;
> cmd.connect.fctype = nvme_fabrics_type_connect;
> @@ -402,8 +403,25 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
> goto out_free_data;
> }
>
> - ctrl->cntlid = le16_to_cpu(res.u16);
> -
> + result = le32_to_cpu(res.u32);
> + ctrl->cntlid = result & 0xFFFF;
> + if ((result >> 16) & 2) {
> + /* Authentication required */
> + ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid 0: failed to setup authentication\n");
> + ret = NVME_SC_AUTH_REQUIRED;
> + goto out_free_data;
> + }
> + ret = nvme_auth_wait(ctrl, NVME_QID_ANY);
> + if (ret)
> + dev_warn(ctrl->device,
> + "qid 0: authentication failed\n");
> + else
> + dev_info(ctrl->device,
> + "qid 0: authenticated\n");

OK, so the auth work is serialized via nvme_auth_wait here... got it..

> + }
> out_free_data:
> kfree(data);
> return ret;
> @@ -436,6 +454,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
> struct nvmf_connect_data *data;
> union nvme_result res;
> int ret;
> + u32 result;
>
> cmd.connect.opcode = nvme_fabrics_command;
> cmd.connect.fctype = nvme_fabrics_type_connect;
> @@ -461,6 +480,24 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
> nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
> &cmd, data);
> }
> + result = le32_to_cpu(res.u32);
> + if ((result >> 16) & 2) {
> + /* Authentication required */
> + ret = nvme_auth_negotiate(ctrl, qid);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid %d: failed to setup authentication\n", qid);
> + ret = NVME_SC_AUTH_REQUIRED;
> + } else {
> + ret = nvme_auth_wait(ctrl, qid);
> + if (ret)
> + dev_warn(ctrl->device,
> + "qid %u: authentication failed\n", qid);
> + else
> + dev_info(ctrl->device,
> + "qid %u: authenticated\n", qid);
> + }
> + }
> kfree(data);
> return ret;
> }
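The result-dword handling in both connect paths can be summarized as follows. The exact spec field names are not quoted here; what the patch relies on is that the low 16 bits carry the controller ID and that bit value 2 in the upper half flags that authentication is required:

```python
def decode_connect_result(result):
    """Split the CQE result dword of a fabrics Connect command, as the
    patched nvmf_connect_admin_queue()/nvmf_connect_io_queue() do."""
    cntlid = result & 0xFFFF
    authreq = (result >> 16) & 0xFFFF
    # The patch tests ((result >> 16) & 2) to decide whether to start
    # DH-HMAC-CHAP negotiation on this queue.
    needs_auth = bool(authreq & 2)
    return cntlid, needs_auth
```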
> @@ -552,6 +589,8 @@ static const match_table_t opt_tokens = {
> { NVMF_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d" },
> { NVMF_OPT_TOS, "tos=%d" },
> { NVMF_OPT_FAIL_FAST_TMO, "fast_io_fail_tmo=%d" },
> + { NVMF_OPT_DHCHAP_SECRET, "dhchap_secret=%s" },
> + { NVMF_OPT_DHCHAP_BIDI, "dhchap_bidi" },
> { NVMF_OPT_ERR, NULL }
> };
>
> @@ -827,6 +866,23 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
> }
> opts->tos = token;
> break;
> + case NVMF_OPT_DHCHAP_SECRET:
> + p = match_strdup(args);
> + if (!p) {
> + ret = -ENOMEM;
> + goto out;
> + }
> + if (strlen(p) < 11 || strncmp(p, "DHHC-1:", 7)) {
> + pr_err("Invalid DH-CHAP secret %s\n", p);
> + ret = -EINVAL;
> + goto out;
> + }
> + kfree(opts->dhchap_secret);
> + opts->dhchap_secret = p;
> + break;
> + case NVMF_OPT_DHCHAP_BIDI:
> + opts->dhchap_bidi = true;
> + break;
> default:
> pr_warn("unknown parameter or missing value '%s' in ctrl creation request\n",
> p);
> @@ -945,6 +1001,7 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts)
> kfree(opts->subsysnqn);
> kfree(opts->host_traddr);
> kfree(opts->host_iface);
> + kfree(opts->dhchap_secret);
> kfree(opts);
> }
> EXPORT_SYMBOL_GPL(nvmf_free_options);
> @@ -954,7 +1011,10 @@ EXPORT_SYMBOL_GPL(nvmf_free_options);
> NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
> NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
> NVMF_OPT_DISABLE_SQFLOW |\
> - NVMF_OPT_FAIL_FAST_TMO)
> + NVMF_OPT_CTRL_LOSS_TMO |\
> + NVMF_OPT_FAIL_FAST_TMO |\
> + NVMF_OPT_DHCHAP_SECRET |\
> + NVMF_OPT_DHCHAP_BIDI)
>
> static struct nvme_ctrl *
> nvmf_create_ctrl(struct device *dev, const char *buf)
> @@ -1171,7 +1231,14 @@ static void __exit nvmf_exit(void)
> BUILD_BUG_ON(sizeof(struct nvmf_connect_command) != 64);
> BUILD_BUG_ON(sizeof(struct nvmf_property_get_command) != 64);
> BUILD_BUG_ON(sizeof(struct nvmf_property_set_command) != 64);
> + BUILD_BUG_ON(sizeof(struct nvmf_auth_send_command) != 64);
> + BUILD_BUG_ON(sizeof(struct nvmf_auth_receive_command) != 64);
> BUILD_BUG_ON(sizeof(struct nvmf_connect_data) != 1024);
> + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_negotiate_data) != 8);
> + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_challenge_data) != 16);
> + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_reply_data) != 16);
> + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_success1_data) != 16);
> + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_success2_data) != 16);
> }
>
> MODULE_LICENSE("GPL v2");
> diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
> index a146cb903869..27df1aac5736 100644
> --- a/drivers/nvme/host/fabrics.h
> +++ b/drivers/nvme/host/fabrics.h
> @@ -67,6 +67,8 @@ enum {
> NVMF_OPT_TOS = 1 << 19,
> NVMF_OPT_FAIL_FAST_TMO = 1 << 20,
> NVMF_OPT_HOST_IFACE = 1 << 21,
> + NVMF_OPT_DHCHAP_SECRET = 1 << 22,
> + NVMF_OPT_DHCHAP_BIDI = 1 << 23,
> };
>
> /**
> @@ -96,6 +98,8 @@ enum {
> * @max_reconnects: maximum number of allowed reconnect attempts before removing
> * the controller, (-1) means reconnect forever, zero means remove
> * immediately;
> + * @dhchap_secret: DH-HMAC-CHAP secret
> + * @dhchap_bidi: enable DH-HMAC-CHAP bi-directional authentication
> * @disable_sqflow: disable controller sq flow control
> * @hdr_digest: generate/verify header digest (TCP)
> * @data_digest: generate/verify data digest (TCP)
> @@ -120,6 +124,8 @@ struct nvmf_ctrl_options {
> unsigned int kato;
> struct nvmf_host *host;
> int max_reconnects;
> + char *dhchap_secret;
> + bool dhchap_bidi;
> bool disable_sqflow;
> bool hdr_digest;
> bool data_digest;
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index 9871c0c9374c..b0dcb7d79b9e 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -318,6 +318,15 @@ struct nvme_ctrl {
> struct work_struct ana_work;
> #endif
>
> +#ifdef CONFIG_NVME_AUTH
> + struct work_struct dhchap_auth_work;
> + struct list_head dhchap_auth_list;
> + struct mutex dhchap_auth_mutex;
> + unsigned char *dhchap_key;
> + size_t dhchap_key_len;
> + u16 transaction;
> +#endif
> +
> /* Power saving configuration */
> u64 ps_max_latency_us;
> bool apst_enabled;
> @@ -885,6 +894,27 @@ static inline bool nvme_ctrl_sgl_supported(struct nvme_ctrl *ctrl)
> return ctrl->sgls & ((1 << 0) | (1 << 1));
> }
>
> +#ifdef CONFIG_NVME_AUTH
> +void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl);
> +void nvme_auth_stop(struct nvme_ctrl *ctrl);
> +int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid);
> +int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid);
> +void nvme_auth_free(struct nvme_ctrl *ctrl);
> +int nvme_auth_generate_key(struct nvme_ctrl *ctrl);
> +#else
> +static inline void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl) {};
> +static inline void nvme_auth_stop(struct nvme_ctrl *ctrl) {};
> +static inline int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
> +{
> + return -EPROTONOSUPPORT;
> +}
> +static inline int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid)
> +{
> + return NVME_SC_AUTH_REQUIRED;
> +}
> +static inline void nvme_auth_free(struct nvme_ctrl *ctrl) {};
> +#endif
> +
> u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
> u8 opcode);
> int nvme_execute_passthru_rq(struct request *rq);
> diff --git a/drivers/nvme/host/trace.c b/drivers/nvme/host/trace.c
> index 2a89c5aa0790..1c36fcedea20 100644
> --- a/drivers/nvme/host/trace.c
> +++ b/drivers/nvme/host/trace.c
> @@ -287,6 +287,34 @@ static const char *nvme_trace_fabrics_property_get(struct trace_seq *p, u8 *spc)
> return ret;
> }
>
> +static const char *nvme_trace_fabrics_auth_send(struct trace_seq *p, u8 *spc)
> +{
> + const char *ret = trace_seq_buffer_ptr(p);
> + u8 spsp0 = spc[1];
> + u8 spsp1 = spc[2];
> + u8 secp = spc[3];
> + u32 tl = get_unaligned_le32(spc + 4);
> +
> + trace_seq_printf(p, "spsp0=%02x, spsp1=%02x, secp=%02x, tl=%u",
> + spsp0, spsp1, secp, tl);
> + trace_seq_putc(p, 0);
> + return ret;
> +}
> +
> +static const char *nvme_trace_fabrics_auth_receive(struct trace_seq *p, u8 *spc)
> +{
> + const char *ret = trace_seq_buffer_ptr(p);
> + u8 spsp0 = spc[1];
> + u8 spsp1 = spc[2];
> + u8 secp = spc[3];
> + u32 al = get_unaligned_le32(spc + 4);
> +
> + trace_seq_printf(p, "spsp0=%02x, spsp1=%02x, secp=%02x, al=%u",
> + spsp0, spsp1, secp, al);
> + trace_seq_putc(p, 0);
> + return ret;
> +}
> +
> static const char *nvme_trace_fabrics_common(struct trace_seq *p, u8 *spc)
> {
> const char *ret = trace_seq_buffer_ptr(p);
> @@ -306,6 +334,10 @@ const char *nvme_trace_parse_fabrics_cmd(struct trace_seq *p,
> return nvme_trace_fabrics_connect(p, spc);
> case nvme_fabrics_type_property_get:
> return nvme_trace_fabrics_property_get(p, spc);
> + case nvme_fabrics_type_auth_send:
> + return nvme_trace_fabrics_auth_send(p, spc);
> + case nvme_fabrics_type_auth_receive:
> + return nvme_trace_fabrics_auth_receive(p, spc);
> default:
> return nvme_trace_fabrics_common(p, spc);
> }
>

2021-09-13 14:37:17

by Hannes Reinecke

Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication

On 9/13/21 3:55 PM, Sagi Grimberg wrote:
>
>
> On 9/10/21 9:43 AM, Hannes Reinecke wrote:
>> Implement NVMe-oF In-Band authentication according to NVMe TPAR 8006.
>> This patch adds two new fabric options 'dhchap_secret' to specify the
>> pre-shared key (in ASCII representation according to NVMe 2.0 section
>> 8.13.5.8 'Secret representation') and 'dhchap_bidi' to request
>> bi-directional
>> authentication of both the host and the controller.
>> Re-authentication can be triggered by writing the PSK into the new
>> controller sysfs attribute 'dhchap_secret'.
>>
>> Signed-off-by: Hannes Reinecke <[email protected]>
>> ---
>>   drivers/nvme/host/Kconfig   |   12 +
>>   drivers/nvme/host/Makefile  |    1 +
>>   drivers/nvme/host/auth.c    | 1285 +++++++++++++++++++++++++++++++++++
>>   drivers/nvme/host/auth.h    |   25 +
>>   drivers/nvme/host/core.c    |   79 ++-
>>   drivers/nvme/host/fabrics.c |   73 +-
>>   drivers/nvme/host/fabrics.h |    6 +
>>   drivers/nvme/host/nvme.h    |   30 +
>>   drivers/nvme/host/trace.c   |   32 +
>>   9 files changed, 1537 insertions(+), 6 deletions(-)
>>   create mode 100644 drivers/nvme/host/auth.c
>>   create mode 100644 drivers/nvme/host/auth.h
>>
>> diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
>> index dc0450ca23a3..97e8412dc42d 100644
>> --- a/drivers/nvme/host/Kconfig
>> +++ b/drivers/nvme/host/Kconfig
>> @@ -83,3 +83,15 @@ config NVME_TCP
>>         from https://github.com/linux-nvme/nvme-cli.
>>           If unsure, say N.
>> +
>> +config NVME_AUTH
>> +    bool "NVM Express over Fabrics In-Band Authentication"
>> +    depends on NVME_CORE
>> +    select CRYPTO_HMAC
>> +    select CRYPTO_SHA256
>> +    select CRYPTO_SHA512
>> +    help
>> +      This provides support for NVMe over Fabrics In-Band Authentication
>> +      for the NVMe over TCP transport.
>
> Not tcp specific...
>
>> diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
>> new file mode 100644
>> index 000000000000..5393ac16a002
>> --- /dev/null
>> +++ b/drivers/nvme/host/auth.c
>> @@ -0,0 +1,1285 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (c) 2020 Hannes Reinecke, SUSE Linux
>> + */
>> +
>> +#include <linux/crc32.h>
>> +#include <linux/base64.h>
>> +#include <asm/unaligned.h>
>> +#include <crypto/hash.h>
>> +#include <crypto/dh.h>
>> +#include <crypto/ffdhe.h>
>> +#include "nvme.h"
>> +#include "fabrics.h"
>> +#include "auth.h"
>> +
>> +static u32 nvme_dhchap_seqnum;
>> +
>> +struct nvme_dhchap_queue_context {
>> +    struct list_head entry;
>> +    struct work_struct auth_work;
>> +    struct nvme_ctrl *ctrl;
>> +    struct crypto_shash *shash_tfm;
>> +    struct crypto_kpp *dh_tfm;
>> +    void *buf;
>> +    size_t buf_size;
>> +    int qid;
>> +    int error;
>> +    u32 s1;
>> +    u32 s2;
>> +    u16 transaction;
>> +    u8 status;
>> +    u8 hash_id;
>> +    u8 hash_len;
>> +    u8 dhgroup_id;
>> +    u8 c1[64];
>> +    u8 c2[64];
>> +    u8 response[64];
>> +    u8 *host_response;
>> +};
>> +
>> +static struct nvme_auth_dhgroup_map {
>> +    int id;
>> +    const char name[16];
>> +    const char kpp[16];
>> +    int privkey_size;
>> +    int pubkey_size;
>> +} dhgroup_map[] = {
>> +    { .id = NVME_AUTH_DHCHAP_DHGROUP_NULL,
>> +      .name = "NULL", .kpp = "NULL",
>
> Nit, no need for all-caps, can do "null"
>
Right. Will be doing so.

[ .. ]
>> +unsigned char *nvme_auth_extract_secret(unsigned char *secret, size_t *out_len)
>> +{
>> +    unsigned char *key;
>> +    u32 crc;
>> +    int key_len;
>> +    size_t allocated_len;
>> +
>> +    allocated_len = strlen(secret);
>
> Can move to declaration initializer.
>
Sure.

>> +    key = kzalloc(allocated_len, GFP_KERNEL);
>> +    if (!key)
>> +        return ERR_PTR(-ENOMEM);
>> +
>> +    key_len = base64_decode(secret, allocated_len, key);
>> +    if (key_len != 36 && key_len != 52 &&
>> +        key_len != 68) {
>> +        pr_debug("Invalid DH-HMAC-CHAP key len %d\n",
>> +             key_len);
>> +        kfree_sensitive(key);
>> +        return ERR_PTR(-EINVAL);
>> +    }
>> +
>> +    /* The last four bytes is the CRC in little-endian format */
>> +    key_len -= 4;
>> +    /*
>> +     * The Linux crc32() implementation doesn't do the pre- and
>> +     * post-inversions, so we have to do them manually.
>> +     */
>> +    crc = ~crc32(~0, key, key_len);
>> +
>> +    if (get_unaligned_le32(key + key_len) != crc) {
>> +        pr_debug("DH-HMAC-CHAP key crc mismatch (key %08x, crc %08x)\n",
>> +               get_unaligned_le32(key + key_len), crc);
>> +        kfree_sensitive(key);
>> +        return ERR_PTR(-EKEYREJECTED);
>> +    }
>> +    *out_len = key_len;
>> +    return key;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_extract_secret);
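For reference, the `~crc32(~0, ...)` construct above is equivalent to the
standard (zlib-style) CRC32, so the tail check can be reproduced in
userspace. A stand-alone sketch (helper names are made up for
illustration, not part of the patch):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Standard reflected CRC32; equivalent to ~crc32(~0, ...) in the kernel */
static uint32_t crc32_std(const uint8_t *data, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= data[i];
		for (int b = 0; b < 8; b++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
	}
	return ~crc;
}

/* Validate a decoded key: the last four bytes are the CRC, little-endian */
bool dhchap_key_crc_ok(const uint8_t *key, size_t key_len)
{
	uint32_t expected;

	if (key_len <= 4)
		return false;
	key_len -= 4;
	expected = (uint32_t)key[key_len] |
		   (uint32_t)key[key_len + 1] << 8 |
		   (uint32_t)key[key_len + 2] << 16 |
		   (uint32_t)key[key_len + 3] << 24;
	return crc32_std(key, key_len) == expected;
}
```

Handy for checking key material generated by tooling against what the
kernel will accept.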
>> +
>> +u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn)
>> +{
>> +    const char *hmac_name = nvme_auth_hmac_name(key_hash);
>> +    struct crypto_shash *key_tfm;
>> +    struct shash_desc *shash;
>> +    u8 *transformed_key;
>> +    int ret;
>> +
>> +    /* No key transformation required */
>> +    if (key_hash == 0)
>> +        return 0;
>> +
>> +    if (!hmac_name) {
>> +        pr_warn("Invalid key hash id %d\n", key_hash);
>> +        return ERR_PTR(-EKEYREJECTED);
>> +    }
>
> newline here.
>
>> +    key_tfm = crypto_alloc_shash(hmac_name, 0, 0);
>> +    if (IS_ERR(key_tfm))
>> +        return (u8 *)key_tfm;
>> +
>> +    shash = kmalloc(sizeof(struct shash_desc) +
>> +            crypto_shash_descsize(key_tfm),
>> +            GFP_KERNEL);
>> +    if (!shash) {
>> +        crypto_free_shash(key_tfm);
>> +        return ERR_PTR(-ENOMEM);
>> +    }
>
> newline here.
>
>> +    transformed_key = kzalloc(crypto_shash_digestsize(key_tfm), GFP_KERNEL);
>> +    if (!transformed_key) {
>> +        ret = -ENOMEM;
>> +        goto out_free_shash;
>> +    }
>> +
>> +    shash->tfm = key_tfm;
>> +    ret = crypto_shash_setkey(key_tfm, key, key_len);
>> +    if (ret < 0)
>> +        goto out_free_shash;
>> +    ret = crypto_shash_init(shash);
>> +    if (ret < 0)
>> +        goto out_free_shash;
>> +    ret = crypto_shash_update(shash, nqn, strlen(nqn));
>> +    if (ret < 0)
>> +        goto out_free_shash;
>> +    ret = crypto_shash_update(shash, "NVMe-over-Fabrics", 17);
>> +    if (ret < 0)
>> +        goto out_free_shash;
>> +    ret = crypto_shash_final(shash, transformed_key);
>> +out_free_shash:
>> +    kfree(shash);
>> +    crypto_free_shash(key_tfm);
>> +    if (ret < 0) {
>> +        kfree_sensitive(transformed_key);
>> +        return ERR_PTR(ret);
>> +    }
>
> Any reason why this is not a reverse cleanup with goto call-sites
> standard style?
>
None in particular.
Will be doing so.
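For anyone following along, the shape being asked for is the usual
kernel one: error paths jump to labels that unwind in reverse
allocation order, and each call-site jumps to the label covering
everything allocated so far. A minimal userspace sketch (the function
name and the two allocations are illustrative stand-ins for the
tfm/shash/key setup above):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of reverse-order goto cleanup; malloc stands in for crypto allocs */
int transform_key_demo(size_t desc_len, size_t key_len)
{
	int ret = 0;
	char *desc, *key;

	desc = malloc(desc_len);
	if (!desc)
		return -ENOMEM;

	key = malloc(key_len);
	if (!key) {
		ret = -ENOMEM;
		goto out_free_desc;
	}

	/* ... hash the NQN into the transformed key here ... */
	memset(key, 0, key_len);

	free(key);
out_free_desc:
	free(desc);
	return ret;
}
```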

>> +    return transformed_key;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_transform_key);
>> +
>> +static int nvme_auth_hash_skey(int hmac_id, u8 *skey, size_t skey_len, u8 *hkey)
>> +{
>> +    const char *digest_name;
>> +    struct crypto_shash *tfm;
>> +    int ret;
>> +
>> +    digest_name = nvme_auth_digest_name(hmac_id);
>> +    if (!digest_name) {
>> +        pr_debug("%s: failed to get digest for %d\n", __func__,
>> +             hmac_id);
>> +        return -EINVAL;
>> +    }
>> +    tfm = crypto_alloc_shash(digest_name, 0, 0);
>> +    if (IS_ERR(tfm))
>> +        return -ENOMEM;
>> +
>> +    ret = crypto_shash_tfm_digest(tfm, skey, skey_len, hkey);
>> +    if (ret < 0)
>> +        pr_debug("%s: Failed to hash digest len %zu\n", __func__,
>> +             skey_len);
>> +
>> +    crypto_free_shash(tfm);
>> +    return ret;
>> +}
>> +
>> +int nvme_auth_augmented_challenge(u8 hmac_id, u8 *skey, size_t skey_len,
>> +        u8 *challenge, u8 *aug, size_t hlen)
>> +{
>> +    struct crypto_shash *tfm;
>> +    struct shash_desc *desc;
>> +    u8 *hashed_key;
>> +    const char *hmac_name;
>> +    int ret;
>> +
>> +    hashed_key = kmalloc(hlen, GFP_KERNEL);
>> +    if (!hashed_key)
>> +        return -ENOMEM;
>> +
>> +    ret = nvme_auth_hash_skey(hmac_id, skey,
>> +                  skey_len, hashed_key);
>> +    if (ret < 0)
>> +        goto out_free_key;
>> +
>> +    hmac_name = nvme_auth_hmac_name(hmac_id);
>> +    if (!hmac_name) {
>> +        pr_warn("%s: invalid hash algorithm %d\n",
>> +            __func__, hmac_id);
>> +        ret = -EINVAL;
>> +        goto out_free_key;
>> +    }
>
> newline.
>
>> +    tfm = crypto_alloc_shash(hmac_name, 0, 0);
>> +    if (IS_ERR(tfm)) {
>> +        ret = PTR_ERR(tfm);
>> +        goto out_free_key;
>> +    }
>
> newline
>
>> +    desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
>> +               GFP_KERNEL);
>> +    if (!desc) {
>> +        ret = -ENOMEM;
>> +        goto out_free_hash;
>> +    }
>> +    desc->tfm = tfm;
>> +
>> +    ret = crypto_shash_setkey(tfm, hashed_key, hlen);
>> +    if (ret)
>> +        goto out_free_desc;
>> +
>> +    ret = crypto_shash_init(desc);
>> +    if (ret)
>> +        goto out_free_desc;
>> +
>> +    ret = crypto_shash_update(desc, challenge, hlen);
>> +    if (ret)
>> +        goto out_free_desc;
>> +
>> +    ret = crypto_shash_final(desc, aug);
>> +out_free_desc:
>> +    kfree_sensitive(desc);
>> +out_free_hash:
>> +    crypto_free_shash(tfm);
>> +out_free_key:
>> +    kfree_sensitive(hashed_key);
>> +    return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_augmented_challenge);
>> +
>> +int nvme_auth_gen_privkey(struct crypto_kpp *dh_tfm, int dh_gid)
>> +{
>> +    char *pkey;
>> +    int ret, pkey_len;
>> +
>> +    if (dh_gid == NVME_AUTH_DHCHAP_DHGROUP_2048 ||
>> +        dh_gid == NVME_AUTH_DHCHAP_DHGROUP_3072 ||
>> +        dh_gid == NVME_AUTH_DHCHAP_DHGROUP_4096 ||
>> +        dh_gid == NVME_AUTH_DHCHAP_DHGROUP_6144 ||
>> +        dh_gid == NVME_AUTH_DHCHAP_DHGROUP_8192) {
>> +        struct dh p = {0};
>> +        int bits = nvme_auth_dhgroup_pubkey_size(dh_gid) << 3;
>> +        int dh_secret_len = 64;
>> +        u8 *dh_secret = kzalloc(dh_secret_len, GFP_KERNEL);
>> +
>> +        if (!dh_secret)
>> +            return -ENOMEM;
>> +
>> +        /*
>> +         * NVMe base spec v2.0: The DH value shall be set to the value
>> +         * of g^x mod p, where 'x' is a random number selected by the
>> +         * host that shall be at least 256 bits long.
>> +         *
>> +         * We will be using a 512 bit random number as private key.
>> +         * This is large enough to provide adequate security, but
>> +         * small enough such that we can trivially conform to
>> +         * NIST SP 800-56A section 5.6.1.1.4 if
>> +         * we guarantee that the random number is not either
>> +         * all 0xff or all 0x00. But that should be guaranteed
>> +         * by the in-kernel RNG anyway.
>> +         */
>> +        get_random_bytes(dh_secret, dh_secret_len);
>> +
>> +        ret = crypto_ffdhe_params(&p, bits);
>> +        if (ret) {
>> +            kfree_sensitive(dh_secret);
>> +            return ret;
>> +        }
>> +
>> +        p.key = dh_secret;
>> +        p.key_size = dh_secret_len;
>> +
>> +        pkey_len = crypto_dh_key_len(&p);
>> +        pkey = kmalloc(pkey_len, GFP_KERNEL);
>> +        if (!pkey) {
>> +            kfree_sensitive(dh_secret);
>> +            return -ENOMEM;
>> +        }
>> +
>> +        get_random_bytes(pkey, pkey_len);
>> +        ret = crypto_dh_encode_key(pkey, pkey_len, &p);
>> +        if (ret) {
>> +            pr_debug("failed to encode private key, error %d\n",
>> +                 ret);
>> +            kfree_sensitive(dh_secret);
>> +            goto out;
>> +        }
>> +    } else {
>> +        pr_warn("invalid dh group %d\n", dh_gid);
>> +        return -EINVAL;
>> +    }
>> +    ret = crypto_kpp_set_secret(dh_tfm, pkey, pkey_len);
>> +    if (ret)
>> +        pr_debug("failed to set private key, error %d\n", ret);
>> +out:
>> +    kfree_sensitive(pkey);
>
> pkey can be unset here.
>
Okay.
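As an aside, the all-0x00 / all-0xff guarantee the comment above relies
on is also cheap to verify explicitly. A userspace sketch of just that
check (the function name is made up):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Reject the two degenerate private-key patterns the comment refers to */
bool dh_privkey_acceptable(const uint8_t *key, size_t len)
{
	bool all_zero = true, all_ones = true;

	for (size_t i = 0; i < len; i++) {
		if (key[i] != 0x00)
			all_zero = false;
		if (key[i] != 0xff)
			all_ones = false;
	}
	return len > 0 && !all_zero && !all_ones;
}
```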

>> +    return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_gen_privkey);
>> +
>> +int nvme_auth_gen_pubkey(struct crypto_kpp *dh_tfm,
>> +        u8 *host_key, size_t host_key_len)
>> +{
>> +    struct kpp_request *req;
>> +    struct crypto_wait wait;
>> +    struct scatterlist dst;
>> +    int ret;
>> +
>> +    req = kpp_request_alloc(dh_tfm, GFP_KERNEL);
>> +    if (!req)
>> +        return -ENOMEM;
>> +
>> +    crypto_init_wait(&wait);
>> +    kpp_request_set_input(req, NULL, 0);
>> +    sg_init_one(&dst, host_key, host_key_len);
>> +    kpp_request_set_output(req, &dst, host_key_len);
>> +    kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
>> +                 crypto_req_done, &wait);
>> +
>> +    ret = crypto_wait_req(crypto_kpp_generate_public_key(req), &wait);
>> +
>
> no need for this newline
>
>> +    kpp_request_free(req);
>> +    return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_gen_pubkey);
>> +
>> +int nvme_auth_gen_shared_secret(struct crypto_kpp *dh_tfm,
>> +        u8 *ctrl_key, size_t ctrl_key_len,
>> +        u8 *sess_key, size_t sess_key_len)
>> +{
>> +    struct kpp_request *req;
>> +    struct crypto_wait wait;
>> +    struct scatterlist src, dst;
>> +    int ret;
>> +
>> +    req = kpp_request_alloc(dh_tfm, GFP_KERNEL);
>> +    if (!req)
>> +        return -ENOMEM;
>> +
>> +    crypto_init_wait(&wait);
>> +    sg_init_one(&src, ctrl_key, ctrl_key_len);
>> +    kpp_request_set_input(req, &src, ctrl_key_len);
>> +    sg_init_one(&dst, sess_key, sess_key_len);
>> +    kpp_request_set_output(req, &dst, sess_key_len);
>> +    kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
>> +                 crypto_req_done, &wait);
>> +
>> +    ret = crypto_wait_req(crypto_kpp_compute_shared_secret(req), &wait);
>> +
>> +    kpp_request_free(req);
>> +    return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_gen_shared_secret);
>> +
>> +static int nvme_auth_send(struct nvme_ctrl *ctrl, int qid,
>> +        void *data, size_t tl)
>> +{
>> +    struct nvme_command cmd = {};
>> +    blk_mq_req_flags_t flags = qid == NVME_QID_ANY ?
>> +        0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED;
>> +    struct request_queue *q = qid == NVME_QID_ANY ?
>> +        ctrl->fabrics_q : ctrl->connect_q;
>> +    int ret;
>> +
>> +    cmd.auth_send.opcode = nvme_fabrics_command;
>> +    cmd.auth_send.fctype = nvme_fabrics_type_auth_send;
>> +    cmd.auth_send.secp = NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER;
>> +    cmd.auth_send.spsp0 = 0x01;
>> +    cmd.auth_send.spsp1 = 0x01;
>> +    cmd.auth_send.tl = tl;
>> +
>> +    ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, tl, 0, qid,
>> +                     0, flags);
>> +    if (ret > 0)
>> +        dev_dbg(ctrl->device,
>> +            "%s: qid %d nvme status %d\n", __func__, qid, ret);
>> +    else if (ret < 0)
>> +        dev_dbg(ctrl->device,
>> +            "%s: qid %d error %d\n", __func__, qid, ret);
>> +    return ret;
>> +}
>> +
>> +static int nvme_auth_receive(struct nvme_ctrl *ctrl, int qid,
>> +        void *buf, size_t al)
>> +{
>> +    struct nvme_command cmd = {};
>> +    blk_mq_req_flags_t flags = qid == NVME_QID_ANY ?
>> +        0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED;
>> +    struct request_queue *q = qid == NVME_QID_ANY ?
>> +        ctrl->fabrics_q : ctrl->connect_q;
>> +    int ret;
>> +
>> +    cmd.auth_receive.opcode = nvme_fabrics_command;
>> +    cmd.auth_receive.fctype = nvme_fabrics_type_auth_receive;
>> +    cmd.auth_receive.secp = NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER;
>> +    cmd.auth_receive.spsp0 = 0x01;
>> +    cmd.auth_receive.spsp1 = 0x01;
>> +    cmd.auth_receive.al = al;
>> +
>> +    ret = __nvme_submit_sync_cmd(q, &cmd, NULL, buf, al, 0, qid,
>> +                     0, flags);
>> +    if (ret > 0) {
>> +        dev_dbg(ctrl->device, "%s: qid %d nvme status %x\n",
>> +            __func__, qid, ret);
>> +        ret = -EIO;
>
> Why EIO?
>
See next comment.

>> +    }
>> +    if (ret < 0) {
>> +        dev_dbg(ctrl->device, "%s: qid %d error %d\n",
>> +            __func__, qid, ret);
>> +        return ret;
>> +    }
>
> Why did you choose to do these error conditionals differently for the
> send and receive functions?
>
Because we have _three_ kinds of errors here: error codes, NVMe status,
and authentication status.
And of course the authentication status is _not_ an NVMe status, so we
can't easily overload both into a single value.
As the authentication status will be set from the received data, I chose
to fold all NVMe status onto -EIO, leaving positive values free for the
authentication status.
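Spelled out as a stand-alone sketch of the convention (not the actual
driver code): negative values are errnos, zero means "go parse the
payload", and NVMe status never escapes the receive path:

```c
#include <errno.h>

/*
 * Fold the three status domains: a transport errno stays negative,
 * any NVMe status word collapses to -EIO, and 0 leaves positive
 * values free for a DH-CHAP auth status extracted from the payload.
 */
int auth_receive_result(int submit_ret)
{
	if (submit_ret > 0)	/* NVMe status */
		return -EIO;
	if (submit_ret < 0)	/* errno from the transport */
		return submit_ret;
	return 0;		/* success; payload may carry auth status */
}
```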

>> +
>> +    return 0;
>> +}
>> +
>> +static int nvme_auth_receive_validate(struct nvme_ctrl *ctrl, int qid,
>> +        struct nvmf_auth_dhchap_failure_data *data,
>> +        u16 transaction, u8 expected_msg)
>> +{
>> +    dev_dbg(ctrl->device, "%s: qid %d auth_type %d auth_id %x\n",
>> +        __func__, qid, data->auth_type, data->auth_id);
>> +
>> +    if (data->auth_type == NVME_AUTH_COMMON_MESSAGES &&
>> +        data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) {
>> +        return data->rescode_exp;
>> +    }
>> +    if (data->auth_type != NVME_AUTH_DHCHAP_MESSAGES ||
>> +        data->auth_id != expected_msg) {
>> +        dev_warn(ctrl->device,
>> +             "qid %d invalid message %02x/%02x\n",
>> +             qid, data->auth_type, data->auth_id);
>> +        return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE;
>> +    }
>> +    if (le16_to_cpu(data->t_id) != transaction) {
>> +        dev_warn(ctrl->device,
>> +             "qid %d invalid transaction ID %d\n",
>> +             qid, le16_to_cpu(data->t_id));
>> +        return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE;
>> +    }
>> +    return 0;
>> +}
>> +
>> +static int nvme_auth_set_dhchap_negotiate_data(struct nvme_ctrl *ctrl,
>> +        struct nvme_dhchap_queue_context *chap)
>> +{
>> +    struct nvmf_auth_dhchap_negotiate_data *data = chap->buf;
>> +    size_t size = sizeof(*data) + sizeof(union nvmf_auth_protocol);
>> +
>> +    if (chap->buf_size < size) {
>> +        chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
>
> Is this an internal error? not sure I understand setting of this status
>
As mentioned above, we now have three possible status codes to contend
with. So yes, this is an internal error, expressed as an authentication
error code.

The spec insists on using an authentication error here; it would be
possible to use a normal NVMe status, but that's not what the spec wants ...

>> +        return -EINVAL;
>> +    }
>> +    memset((u8 *)chap->buf, 0, size);
>> +    data->auth_type = NVME_AUTH_COMMON_MESSAGES;
>> +    data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
>> +    data->t_id = cpu_to_le16(chap->transaction);
>> +    data->sc_c = 0; /* No secure channel concatenation */
>> +    data->napd = 1;
>> +    data->auth_protocol[0].dhchap.authid = NVME_AUTH_DHCHAP_AUTH_ID;
>> +    data->auth_protocol[0].dhchap.halen = 3;
>> +    data->auth_protocol[0].dhchap.dhlen = 6;
>> +    data->auth_protocol[0].dhchap.idlist[0] = NVME_AUTH_DHCHAP_SHA256;
>> +    data->auth_protocol[0].dhchap.idlist[1] = NVME_AUTH_DHCHAP_SHA384;
>> +    data->auth_protocol[0].dhchap.idlist[2] = NVME_AUTH_DHCHAP_SHA512;
>> +    data->auth_protocol[0].dhchap.idlist[3] = NVME_AUTH_DHCHAP_DHGROUP_NULL;
>> +    data->auth_protocol[0].dhchap.idlist[4] = NVME_AUTH_DHCHAP_DHGROUP_2048;
>> +    data->auth_protocol[0].dhchap.idlist[5] = NVME_AUTH_DHCHAP_DHGROUP_3072;
>> +    data->auth_protocol[0].dhchap.idlist[6] = NVME_AUTH_DHCHAP_DHGROUP_4096;
>> +    data->auth_protocol[0].dhchap.idlist[7] = NVME_AUTH_DHCHAP_DHGROUP_6144;
>> +    data->auth_protocol[0].dhchap.idlist[8] = NVME_AUTH_DHCHAP_DHGROUP_8192;
>> +
>> +    return size;
>> +}
>> +
>> +static int nvme_auth_process_dhchap_challenge(struct nvme_ctrl *ctrl,
>> +        struct nvme_dhchap_queue_context *chap)
>> +{
>> +    struct nvmf_auth_dhchap_challenge_data *data = chap->buf;
>> +    size_t size = sizeof(*data) + data->hl + data->dhvlen;
>> +    const char *hmac_name, *gid_name;
>> +
>> +    if (chap->buf_size < size) {
>> +        chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
>> +        return NVME_SC_INVALID_FIELD;
>> +    }
>> +
>> +    hmac_name = nvme_auth_hmac_name(data->hashid);
>> +    if (!hmac_name) {
>> +        dev_warn(ctrl->device,
>> +             "qid %d: invalid HASH ID %d\n",
>> +             chap->qid, data->hashid);
>> +        chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
>> +        return -EPROTO;
>> +    }
>> +    if (chap->hash_id == data->hashid && chap->shash_tfm &&
>> +        !strcmp(crypto_shash_alg_name(chap->shash_tfm), hmac_name) &&
>> +        crypto_shash_digestsize(chap->shash_tfm) == data->hl) {
>> +        dev_dbg(ctrl->device,
>> +            "qid %d: reuse existing hash %s\n",
>> +            chap->qid, hmac_name);
>> +        goto select_kpp;
>> +    }
>
> newline
>
>> +    if (chap->shash_tfm) {
>> +        crypto_free_shash(chap->shash_tfm);
>> +        chap->hash_id = 0;
>> +        chap->hash_len = 0;
>> +    }
>
> newline
>
>> +    chap->shash_tfm = crypto_alloc_shash(hmac_name, 0,
>> +                         CRYPTO_ALG_ALLOCATES_MEMORY);
>> +    if (IS_ERR(chap->shash_tfm)) {
>> +        dev_warn(ctrl->device,
>> +             "qid %d: failed to allocate hash %s, error %ld\n",
>> +             chap->qid, hmac_name, PTR_ERR(chap->shash_tfm));
>> +        chap->shash_tfm = NULL;
>> +        chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED;
>> +        return NVME_SC_AUTH_REQUIRED;
>> +    }
>
> newline
>
>> +    if (crypto_shash_digestsize(chap->shash_tfm) != data->hl) {
>> +        dev_warn(ctrl->device,
>> +             "qid %d: invalid hash length %d\n",
>> +             chap->qid, data->hl);
>> +        crypto_free_shash(chap->shash_tfm);
>> +        chap->shash_tfm = NULL;
>> +        chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE;
>> +        return NVME_SC_AUTH_REQUIRED;
>> +    }
>
> newline
>
>> +    if (chap->hash_id != data->hashid) {
>> +        kfree(chap->host_response);
>
> kfree_sensitive? also why is is freed here? where was it allocated?
>
This is generated when calculating the host response in
nvme_auth_dhchap_host_response().

>> +        chap->host_response = NULL;
>> +    }
>> +    chap->hash_id = data->hashid;
>> +    chap->hash_len = data->hl;
>> +    dev_dbg(ctrl->device, "qid %d: selected hash %s\n",
>> +        chap->qid, hmac_name);
>> +
>> +    gid_name = nvme_auth_dhgroup_kpp(data->dhgid);
>> +    if (!gid_name) {
>> +        dev_warn(ctrl->device,
>> +             "qid %d: invalid DH group id %d\n",
>> +             chap->qid, data->dhgid);
>> +        chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
>> +        return -EPROTO;
>
> No need for all the previous frees?
> Maybe we can rework these such that we first do all the checks and then
> go and allocate stuff?
>

Hmm. Will have a look if that is feasible.
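One piece that is easy to hoist ahead of any allocation is the DH group
vs. DH value consistency rule from the hunk above. A userspace sketch of
just that check (the enum mirrors the patch's NVME_AUTH_DHCHAP_DHGROUP_*
names; values are illustrative):

```c
#include <errno.h>
#include <stddef.h>

enum {
	DHGROUP_NULL = 0x00,	/* stands in for NVME_AUTH_DHCHAP_DHGROUP_NULL */
	DHGROUP_2048 = 0x01,	/* stands in for NVME_AUTH_DHCHAP_DHGROUP_2048 */
};

/*
 * A real DH group must come with a DH value, and the NULL group must
 * not carry one. Checking this before allocating the kpp transform
 * means the error paths have nothing to free.
 */
int check_dh_params(int dhgid, size_t dhvlen)
{
	if (dhgid == DHGROUP_NULL)
		return dhvlen ? -EPROTO : 0;
	return dhvlen ? 0 : -EPROTO;
}
```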

>> +    }
>> +
>> +    if (data->dhgid != NVME_AUTH_DHCHAP_DHGROUP_NULL) {
>> +        if (data->dhvlen == 0) {
>> +            dev_warn(ctrl->device,
>> +                 "qid %d: empty DH value\n",
>> +                 chap->qid);
>> +            chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
>> +            return -EPROTO;
>> +        }
>> +        chap->dh_tfm = crypto_alloc_kpp(gid_name, 0, 0);
>> +        if (IS_ERR(chap->dh_tfm)) {
>> +            int ret = PTR_ERR(chap->dh_tfm);
>> +
>> +            dev_warn(ctrl->device,
>> +                 "qid %d: failed to initialize %s\n",
>> +                 chap->qid, gid_name);
>> +            chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
>> +            chap->dh_tfm = NULL;
>> +            return ret;
>> +        }
>> +        chap->dhgroup_id = data->dhgid;
>> +    } else if (data->dhvlen != 0) {
>> +        dev_warn(ctrl->device,
>> +             "qid %d: invalid DH value for NULL DH\n",
>> +            chap->qid);
>> +        chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE;
>> +        return -EPROTO;
>> +    }
>> +    dev_dbg(ctrl->device, "qid %d: selected DH group %s\n",
>> +        chap->qid, gid_name);
>> +
>> +select_kpp:
>> +    chap->s1 = le32_to_cpu(data->seqnum);
>> +    memcpy(chap->c1, data->cval, chap->hash_len);
>> +
>> +    return 0;
>> +}
>> +
>> +static int nvme_auth_set_dhchap_reply_data(struct nvme_ctrl *ctrl,
>> +        struct nvme_dhchap_queue_context *chap)
>> +{
>> +    struct nvmf_auth_dhchap_reply_data *data = chap->buf;
>> +    size_t size = sizeof(*data);
>> +
>> +    size += 2 * chap->hash_len;
>> +    if (ctrl->opts->dhchap_bidi) {
>> +        get_random_bytes(chap->c2, chap->hash_len);
>> +        chap->s2 = nvme_dhchap_seqnum++;
>
> Any serialization needed on nvme_dhchap_seqnum?
>

Maybe; will be switching to atomic here.
Have been lazy ...
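The atomic version is essentially a one-liner; a userspace sketch with
C11 atomics (the kernel would use atomic_inc_return instead):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Shared sequence counter; atomic_fetch_add makes the increment race-free */
static atomic_uint nvme_dhchap_seqnum;

uint32_t dhchap_next_seqnum(void)
{
	return atomic_fetch_add(&nvme_dhchap_seqnum, 1) + 1;
}
```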

>> +    } else
>> +        memset(chap->c2, 0, chap->hash_len);
>> +
>> +    if (chap->buf_size < size) {
>> +        chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
>> +        return -EINVAL;
>> +    }
>> +    memset(chap->buf, 0, size);
>> +    data->auth_type = NVME_AUTH_DHCHAP_MESSAGES;
>> +    data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_REPLY;
>> +    data->t_id = cpu_to_le16(chap->transaction);
>> +    data->hl = chap->hash_len;
>> +    data->dhvlen = 0;
>> +    data->seqnum = cpu_to_le32(chap->s2);
>> +    memcpy(data->rval, chap->response, chap->hash_len);
>> +    if (ctrl->opts->dhchap_bidi) {
>
> Can we unite the "if (ctrl->opts->dhchap_bidi)"
> conditionals?
>

Sure.

[ .. ]
>> +int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid)
>> +{
>> +    struct nvme_dhchap_queue_context *chap;
>> +
>> +    if (!ctrl->dhchap_key || !ctrl->dhchap_key_len) {
>> +        dev_warn(ctrl->device, "qid %d: no key\n", qid);
>> +        return -ENOKEY;
>> +    }
>> +
>> +    mutex_lock(&ctrl->dhchap_auth_mutex);
>> +    /* Check if the context is already queued */
>> +    list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
>> +        if (chap->qid == qid) {
>> +            mutex_unlock(&ctrl->dhchap_auth_mutex);
>> +            queue_work(nvme_wq, &chap->auth_work);
>> +            return 0;
>> +        }
>> +    }
>> +    chap = kzalloc(sizeof(*chap), GFP_KERNEL);
>> +    if (!chap) {
>> +        mutex_unlock(&ctrl->dhchap_auth_mutex);
>> +        return -ENOMEM;
>> +    }
>> +    chap->qid = qid;
>> +    chap->ctrl = ctrl;
>> +
>> +    /*
>> +     * Allocate a large enough buffer for the entire negotiation:
>> +     * 4k should be enough to ffdhe8192.
>> +     */
>> +    chap->buf_size = 4096;
>> +    chap->buf = kzalloc(chap->buf_size, GFP_KERNEL);
>> +    if (!chap->buf) {
>> +        mutex_unlock(&ctrl->dhchap_auth_mutex);
>> +        kfree(chap);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    INIT_WORK(&chap->auth_work, __nvme_auth_work);
>> +    list_add(&chap->entry, &ctrl->dhchap_auth_list);
>> +    mutex_unlock(&ctrl->dhchap_auth_mutex);
>> +    queue_work(nvme_wq, &chap->auth_work);
>
> Why is the auth in a work? e.g. it won't fail the connect?
>
For re-authentication.
Re-authentication should _not_ fail the connection if it stops at some
intermediate step; the status is only updated once the protocol has run
to completion.
That means we will have I/O ongoing while re-authentication is in
progress, so we can't stop all I/O here but rather need to shift the
authentication onto a workqueue.
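A toy userspace model of that split, with a pthread standing in for the
workqueue item (structure and names are illustrative only): negotiate
kicks the work off asynchronously, wait synchronizes and collects the
per-queue result:

```c
#include <pthread.h>

/* Toy model: auth runs concurrently with I/O; wait() joins and reads result */
struct chap_ctx {
	pthread_t thread;
	int error;
};

static void *auth_work(void *arg)
{
	struct chap_ctx *chap = arg;

	/* ... run the DH-CHAP protocol to completion ... */
	chap->error = 0;	/* status only updated once it finished */
	return NULL;
}

int auth_negotiate(struct chap_ctx *chap)
{
	chap->error = -1;	/* not yet authenticated */
	return pthread_create(&chap->thread, NULL, auth_work, chap);
}

int auth_wait(struct chap_ctx *chap)
{
	pthread_join(chap->thread, NULL);
	return chap->error;
}
```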

>> +    return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_negotiate);
>> +
>> +int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid)
>> +{
>> +    struct nvme_dhchap_queue_context *chap;
>> +    int ret;
>> +
>> +    mutex_lock(&ctrl->dhchap_auth_mutex);
>> +    list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) {
>> +        if (chap->qid != qid)
>> +            continue;
>> +        mutex_unlock(&ctrl->dhchap_auth_mutex);
>> +        flush_work(&chap->auth_work);
>> +        ret = chap->error;
>> +        nvme_auth_reset(chap);
>> +        return ret;
>> +    }
>> +    mutex_unlock(&ctrl->dhchap_auth_mutex);
>> +    return -ENXIO;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_wait);
>> +
>> +/* Assumes that the controller is in state RESETTING */
>> +static void nvme_dhchap_auth_work(struct work_struct *work)
>> +{
>> +    struct nvme_ctrl *ctrl =
>> +        container_of(work, struct nvme_ctrl, dhchap_auth_work);
>> +    int ret, q;
>> +
>> +    nvme_stop_queues(ctrl);
>> +    /* Authenticate admin queue first */
>> +    ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY);
>> +    if (ret) {
>> +        dev_warn(ctrl->device,
>> +             "qid 0: error %d setting up authentication\n", ret);
>> +        goto out;
>> +    }
>> +    ret = nvme_auth_wait(ctrl, NVME_QID_ANY);
>> +    if (ret) {
>> +        dev_warn(ctrl->device,
>> +             "qid 0: authentication failed\n");
>> +        goto out;
>> +    }
>> +    dev_info(ctrl->device, "qid 0: authenticated\n");
>> +
>> +    for (q = 1; q < ctrl->queue_count; q++) {
>> +        ret = nvme_auth_negotiate(ctrl, q);
>> +        if (ret) {
>> +            dev_warn(ctrl->device,
>> +                 "qid %d: error %d setting up authentication\n",
>> +                 q, ret);
>> +            goto out;
>> +        }
>> +    }
>> +out:
>> +    /*
>> +     * Failure is a soft-state; credentials remain valid until
>> +     * the controller terminates the connection.
>> +     */
>> +    if (nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
>> +        nvme_start_queues(ctrl);
>> +}
>> +
>> +void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl)
>> +{
>> +    INIT_LIST_HEAD(&ctrl->dhchap_auth_list);
>> +    INIT_WORK(&ctrl->dhchap_auth_work, nvme_dhchap_auth_work);
>> +    mutex_init(&ctrl->dhchap_auth_mutex);
>> +    nvme_auth_generate_key(ctrl);
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_init_ctrl);
>> +
>> +void nvme_auth_stop(struct nvme_ctrl *ctrl)
>> +{
>> +    struct nvme_dhchap_queue_context *chap = NULL, *tmp;
>> +
>> +    cancel_work_sync(&ctrl->dhchap_auth_work);
>> +    mutex_lock(&ctrl->dhchap_auth_mutex);
>> +    list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry)
>> +        cancel_work_sync(&chap->auth_work);
>> +    mutex_unlock(&ctrl->dhchap_auth_mutex);
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_stop);
>> +
>> +void nvme_auth_free(struct nvme_ctrl *ctrl)
>> +{
>> +    struct nvme_dhchap_queue_context *chap = NULL, *tmp;
>> +
>> +    mutex_lock(&ctrl->dhchap_auth_mutex);
>> +    list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry) {
>> +        list_del_init(&chap->entry);
>> +        flush_work(&chap->auth_work);
>> +        __nvme_auth_free(chap);
>> +    }
>> +    mutex_unlock(&ctrl->dhchap_auth_mutex);
>> +    kfree(ctrl->dhchap_key);
>> +    ctrl->dhchap_key = NULL;
>> +    ctrl->dhchap_key_len = 0;
>> +}
>> +EXPORT_SYMBOL_GPL(nvme_auth_free);
>> diff --git a/drivers/nvme/host/auth.h b/drivers/nvme/host/auth.h
>> new file mode 100644
>> index 000000000000..cf1255f9db6d
>> --- /dev/null
>> +++ b/drivers/nvme/host/auth.h
>> @@ -0,0 +1,25 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2021 Hannes Reinecke, SUSE Software Solutions
>> + */
>> +
>> +#ifndef _NVME_AUTH_H
>> +#define _NVME_AUTH_H
>> +
>> +#include <crypto/kpp.h>
>> +
>> +const char *nvme_auth_dhgroup_name(int dhgroup_id);
>> +int nvme_auth_dhgroup_pubkey_size(int dhgroup_id);
>> +int nvme_auth_dhgroup_privkey_size(int dhgroup_id);
>> +const char *nvme_auth_dhgroup_kpp(int dhgroup_id);
>> +int nvme_auth_dhgroup_id(const char *dhgroup_name);
>> +
>> +const char *nvme_auth_hmac_name(int hmac_id);
>> +const char *nvme_auth_digest_name(int hmac_id);
>> +int nvme_auth_hmac_id(const char *hmac_name);
>> +
>> +unsigned char *nvme_auth_extract_secret(unsigned char *dhchap_secret,
>> +                    size_t *dhchap_key_len);
>> +u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn);
>> +
>> +#endif /* _NVME_AUTH_H */
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 7efb31b87f37..f669b054790b 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -24,6 +24,7 @@
>>     #include "nvme.h"
>>   #include "fabrics.h"
>> +#include "auth.h"
>>     #define CREATE_TRACE_POINTS
>>   #include "trace.h"
>> @@ -322,6 +323,7 @@ enum nvme_disposition {
>>       COMPLETE,
>>       RETRY,
>>       FAILOVER,
>> +    AUTHENTICATE,
>>   };
>>     static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
>> @@ -329,6 +331,9 @@ static inline enum nvme_disposition nvme_decide_disposition(struct request *req)
>>       if (likely(nvme_req(req)->status == 0))
>>           return COMPLETE;
>>   +    if ((nvme_req(req)->status & 0x7ff) == NVME_SC_AUTH_REQUIRED)
>> +        return AUTHENTICATE;
>> +
>>       if (blk_noretry_request(req) ||
>>           (nvme_req(req)->status & NVME_SC_DNR) ||
>>           nvme_req(req)->retries >= nvme_max_retries)
>> @@ -361,11 +366,13 @@ static inline void nvme_end_req(struct request *req)
>>     void nvme_complete_rq(struct request *req)
>>   {
>> +    struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
>> +
>>       trace_nvme_complete_rq(req);
>>       nvme_cleanup_cmd(req);
>>   -    if (nvme_req(req)->ctrl->kas)
>> -        nvme_req(req)->ctrl->comp_seen = true;
>> +    if (ctrl->kas)
>> +        ctrl->comp_seen = true;
>>         switch (nvme_decide_disposition(req)) {
>>       case COMPLETE:
>> @@ -377,6 +384,15 @@ void nvme_complete_rq(struct request *req)
>>       case FAILOVER:
>>           nvme_failover_req(req);
>>           return;
>> +    case AUTHENTICATE:
>> +#ifdef CONFIG_NVME_AUTH
>> +        if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
>> +            queue_work(nvme_wq, &ctrl->dhchap_auth_work);
>
> Why is the state change here and not in nvme_dhchap_auth_work?
>
Because switching to 'resetting' is an easy way to synchronize with the
admin queue.

>> +        nvme_retry_req(req);
>> +#else
>> +        nvme_end_req(req);
>> +#endif
>> +        return;
>>       }
>>   }
>>   EXPORT_SYMBOL_GPL(nvme_complete_rq);
>> @@ -707,7 +723,9 @@ bool __nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>>           switch (ctrl->state) {
>>           case NVME_CTRL_CONNECTING:
>>             if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
>> -                req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
>> +                (req->cmd->fabrics.fctype == nvme_fabrics_type_connect ||
>> +                 req->cmd->fabrics.fctype == nvme_fabrics_type_auth_send ||
>> +                 req->cmd->fabrics.fctype == nvme_fabrics_type_auth_receive))
>
> What happens if the auth command comes before the connect (say in case
> of ctrl reset when auth was already queued but not yet executed?
>
See below.

>>                   return true;
>>               break;
>>           default:
>> @@ -3458,6 +3476,51 @@ static ssize_t
>> nvme_ctrl_fast_io_fail_tmo_store(struct device *dev,
>>   static DEVICE_ATTR(fast_io_fail_tmo, S_IRUGO | S_IWUSR,
>>       nvme_ctrl_fast_io_fail_tmo_show, nvme_ctrl_fast_io_fail_tmo_store);
>>   +#ifdef CONFIG_NVME_AUTH
>> +static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
>> +        struct device_attribute *attr, char *buf)
>> +{
>> +    struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
>> +    struct nvmf_ctrl_options *opts = ctrl->opts;
>> +
>> +    if (!opts->dhchap_secret)
>> +        return sysfs_emit(buf, "none\n");
>> +    return sysfs_emit(buf, "%s\n", opts->dhchap_secret);
>
> Should we actually show this? don't know enough how much the secret
> should be kept a secret...
>
I found it logical, as we need the 'store' functionality to trigger
re-authentication.
But sure, we can make this a write-only attribute.

>> +}
>> +
>> +static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
>> +        struct device_attribute *attr, const char *buf, size_t count)
>> +{
>> +    struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
>> +    struct nvmf_ctrl_options *opts = ctrl->opts;
>> +    char *dhchap_secret;
>> +
>> +    if (!ctrl->opts->dhchap_secret)
>> +        return -EINVAL;
>> +    if (count < 7)
>> +        return -EINVAL;
>> +    if (memcmp(buf, "DHHC-1:", 7))
>> +        return -EINVAL;
>> +
>> +    dhchap_secret = kzalloc(count + 1, GFP_KERNEL);
>> +    if (!dhchap_secret)
>> +        return -ENOMEM;
>> +    memcpy(dhchap_secret, buf, count);
>> +    if (strcmp(dhchap_secret, opts->dhchap_secret)) {
>> +        kfree(opts->dhchap_secret);
>> +        opts->dhchap_secret = dhchap_secret;
>> +        /* Key has changed; reset authentication data */
>> +        nvme_auth_free(ctrl);
>> +        nvme_auth_generate_key(ctrl);
>> +    }
>
> Nice, worth a comment "/* Re-authentication with new secret */"
>
Right, will do.

Thanks for the review!

Cheers,

Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

2021-09-14 07:07:00

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication

>>> @@ -361,11 +366,13 @@ static inline void nvme_end_req(struct request
>>> *req)
>>>     void nvme_complete_rq(struct request *req)
>>>   {
>>> +    struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
>>> +
>>>       trace_nvme_complete_rq(req);
>>>       nvme_cleanup_cmd(req);
>>>   -    if (nvme_req(req)->ctrl->kas)
>>> -        nvme_req(req)->ctrl->comp_seen = true;
>>> +    if (ctrl->kas)
>>> +        ctrl->comp_seen = true;
>>>         switch (nvme_decide_disposition(req)) {
>>>       case COMPLETE:
>>> @@ -377,6 +384,15 @@ void nvme_complete_rq(struct request *req)
>>>       case FAILOVER:
>>>           nvme_failover_req(req);
>>>           return;
>>> +    case AUTHENTICATE:
>>> +#ifdef CONFIG_NVME_AUTH
>>> +        if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
>>> +            queue_work(nvme_wq, &ctrl->dhchap_auth_work);
>>
>> Why is the state change here and not in nvme_dhchap_auth_work?
>>
> Because switching to 'resetting' is an easy way to synchronize with the
> admin queue.

Maybe fold this into nvme_authenticate_ctrl? in case someone adds/moves
this in the future and forgets the ctrl state serialization?

2021-09-14 07:20:26

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication

On 9/14/21 9:06 AM, Sagi Grimberg wrote:
>>>> @@ -361,11 +366,13 @@ static inline void nvme_end_req(struct request
>>>> *req)
>>>>      void nvme_complete_rq(struct request *req)
>>>>    {
>>>> +    struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
>>>> +
>>>>        trace_nvme_complete_rq(req);
>>>>        nvme_cleanup_cmd(req);
>>>>    -    if (nvme_req(req)->ctrl->kas)
>>>> -        nvme_req(req)->ctrl->comp_seen = true;
>>>> +    if (ctrl->kas)
>>>> +        ctrl->comp_seen = true;
>>>>          switch (nvme_decide_disposition(req)) {
>>>>        case COMPLETE:
>>>> @@ -377,6 +384,15 @@ void nvme_complete_rq(struct request *req)
>>>>        case FAILOVER:
>>>>            nvme_failover_req(req);
>>>>            return;
>>>> +    case AUTHENTICATE:
>>>> +#ifdef CONFIG_NVME_AUTH
>>>> +        if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
>>>> +            queue_work(nvme_wq, &ctrl->dhchap_auth_work);
>>>
>>> Why is the state change here and not in nvme_dhchap_auth_work?
>>>
>> Because switching to 'resetting' is an easy way to synchronize with the
>> admin queue.
>
> Maybe fold this into nvme_authenticate_ctrl? in case someone adds/moves
> this in the future and forgets the ctrl state serialization?

Yeah; not a bad idea. Will be looking into it.

Cheers,

Hannes

2021-09-16 17:05:36

by Chaitanya Kulkarni

[permalink] [raw]
Subject: Re: [PATCH 01/12] crypto: add crypto_has_shash()

On 9/9/21 11:43 PM, Hannes Reinecke wrote:
> Add helper function to determine if a given synchronous hash is supported.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
>

Looks good.

Reviewed-by: Chaitanya Kulkarni <[email protected]>


2021-09-16 17:06:21

by Chaitanya Kulkarni

[permalink] [raw]
Subject: Re: [PATCH 02/12] crypto: add crypto_has_kpp()

On 9/9/21 11:43 PM, Hannes Reinecke wrote:
> Add helper function to determine if a given key-agreement protocol primitive is supported.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
> ---
>

Looks good.

Reviewed-by: Chaitanya Kulkarni <[email protected]>


2021-09-16 17:09:12

by Chaitanya Kulkarni

[permalink] [raw]
Subject: Re: [PATCH 06/12] nvme-fabrics: decode 'authentication required' connect error

On 9/9/21 11:43 PM, Hannes Reinecke wrote:
> The 'connect' command might fail with NVME_SC_AUTH_REQUIRED, so we
> should be decoding this error, too.
>
> Signed-off-by: Hannes Reinecke <[email protected]>

Looks good.

Reviewed-by: Chaitanya Kulkarni <[email protected]>


2021-09-16 17:10:25

by Chaitanya Kulkarni

[permalink] [raw]
Subject: Re: [PATCH 05/12] nvme: add definitions for NVMe In-Band authentication

On 9/9/21 11:43 PM, Hannes Reinecke wrote:
> Signed-off-by: Hannes Reinecke <[email protected]>
> ---
> include/linux/nvme.h | 186 ++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 185 insertions(+), 1 deletion(-)
>

Probably worth mentioning a TP name here so we can refer to it later,
instead of an empty commit message?


2021-09-17 12:26:58

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 05/12] nvme: add definitions for NVMe In-Band authentication

On 9/16/21 7:04 PM, Chaitanya Kulkarni wrote:
> On 9/9/21 11:43 PM, Hannes Reinecke wrote:
>> Signed-off-by: Hannes Reinecke <[email protected]>
>> ---
>> include/linux/nvme.h | 186 ++++++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 185 insertions(+), 1 deletion(-)
>>
>
> Probably worth mentioning a TP name here so we can refer to it later,
> instead of an empty commit message?
>
Had been thinking about it, but then decided against it. Once the TPAR
is folded into the main spec it's getting really hard to figure out
exactly what individual TPARs were referring to, so I prefer to stick
with 'In-Band authentication' instead of the TPAR number.
But I can add that to the commit message, sure.

Cheers,

Hannes

2021-09-19 13:01:11

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCHv3 00/12] nvme: In-band authentication support

>>> Hi all,
>>>
>>> recent updates to the NVMe spec have added definitions for in-band
>>> authentication, and seeing that it provides some real benefit
>>> especially for NVMe-TCP here's an attempt to implement it.
>>>
>>> Tricky bit here is that the specification orients itself on TLS 1.3,
>>> but supports only the FFDHE groups. Which of course the kernel doesn't
>>> support. I've been able to come up with a patch for this, but as this
>>> is my first attempt to fix anything in the crypto area I would invite
>>> people more familiar with these matters to have a look.
>>>
>>> Also note that this is just for in-band authentication. Secure
>>> concatenation (ie starting TLS with the negotiated parameters) is not
>>> implemented; one would need to update the kernel TLS implementation
>>> for this, which at this time is beyond scope.
>>>
>>> As usual, comments and reviews are welcome.
>>
>> Still no nvme-cli nor nvmetcli :(
>
> Just send it (for libnvme and nvme-cli). Patch for nvmetcli to follow.

Hey Hannes,

I think that this series is getting into close-to-inclusion shape.
Please in your next respin:
1. Make sure to send nvme-cli and nvmetcli with the series
2. Collect Review tags

Thanks!

2021-09-26 14:32:02

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 10/12] nvmet: Implement basic In-Band Authentication


> +int nvmet_setup_auth(struct nvmet_ctrl *ctrl)
> +{
> + int ret = 0;
> + struct nvmet_host_link *p;
> + struct nvmet_host *host = NULL;
> + const char *hash_name;
> +
> + down_read(&nvmet_config_sem);
> + if (ctrl->subsys->type == NVME_NQN_DISC)
> + goto out_unlock;

+ if (ctrl->subsys->allow_any_host)
+ goto out_unlock;

2021-09-26 22:05:58

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication


> +/* Assumes that the controller is in state RESETTING */
> +static void nvme_dhchap_auth_work(struct work_struct *work)
> +{
> + struct nvme_ctrl *ctrl =
> + container_of(work, struct nvme_ctrl, dhchap_auth_work);
> + int ret, q;
> +
> + nvme_stop_queues(ctrl);

blk_mq_quiesce_queue(ctrl->admin_q);

> + /* Authenticate admin queue first */
> + ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid 0: error %d setting up authentication\n", ret);
> + goto out;
> + }
> + ret = nvme_auth_wait(ctrl, NVME_QID_ANY);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid 0: authentication failed\n");
> + goto out;
> + }
> + dev_info(ctrl->device, "qid 0: authenticated\n");
> +
> + for (q = 1; q < ctrl->queue_count; q++) {
> + ret = nvme_auth_negotiate(ctrl, q);
> + if (ret) {
> + dev_warn(ctrl->device,
> + "qid %d: error %d setting up authentication\n",
> + q, ret);
> + goto out;
> + }
> + }
> +out:
> + /*
> + * Failure is a soft-state; credentials remain valid until
> + * the controller terminates the connection.
> + */
> + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
> + nvme_start_queues(ctrl);
blk_mq_unquiesce_queue(ctrl->admin_q);

> +}

2021-09-26 22:52:39

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 10/12] nvmet: Implement basic In-Band Authentication


> +void nvmet_execute_auth_send(struct nvmet_req *req)
> +{
> + struct nvmet_ctrl *ctrl = req->sq->ctrl;
> + struct nvmf_auth_dhchap_success2_data *data;
> + void *d;
> + u32 tl;
> + u16 status = 0;
> +
> + if (req->cmd->auth_send.secp != NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
> + status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
> + req->error_loc =
> + offsetof(struct nvmf_auth_send_command, secp);
> + goto done;
> + }
> + if (req->cmd->auth_send.spsp0 != 0x01) {
> + status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
> + req->error_loc =
> + offsetof(struct nvmf_auth_send_command, spsp0);
> + goto done;
> + }
> + if (req->cmd->auth_send.spsp1 != 0x01) {
> + status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
> + req->error_loc =
> + offsetof(struct nvmf_auth_send_command, spsp1);
> + goto done;
> + }
> + tl = le32_to_cpu(req->cmd->auth_send.tl);
> + if (!tl) {
> + status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
> + req->error_loc =
> + offsetof(struct nvmf_auth_send_command, tl);
> + goto done;
> + }
> + if (!nvmet_check_transfer_len(req, tl)) {
> + pr_debug("%s: transfer length mismatch (%u)\n", __func__, tl);
> + return;
> + }
> +
> + d = kmalloc(tl, GFP_KERNEL);
> + if (!d) {
> + status = NVME_SC_INTERNAL;
> + goto done;
> + }
> +
> + status = nvmet_copy_from_sgl(req, 0, d, tl);
> + if (status) {
> + kfree(d);
> + goto done;
> + }
> +
> + data = d;
> + pr_debug("%s: ctrl %d qid %d type %d id %d step %x\n", __func__,
> + ctrl->cntlid, req->sq->qid, data->auth_type, data->auth_id,
> + req->sq->dhchap_step);
> + if (data->auth_type != NVME_AUTH_COMMON_MESSAGES &&
> + data->auth_type != NVME_AUTH_DHCHAP_MESSAGES)
> + goto done_failure1;
> + if (data->auth_type == NVME_AUTH_COMMON_MESSAGES) {
> + if (data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE) {
> + /* Restart negotiation */
> + pr_debug("%s: ctrl %d qid %d reset negotiation\n", __func__,
> + ctrl->cntlid, req->sq->qid);

This is the point where you need to reset also auth config as this may
have changed and the host will not create a new controller but rather
re-authenticate on the existing controller.

i.e.

+ if (!req->sq->qid) {
+ nvmet_destroy_auth(ctrl);
+ if (nvmet_setup_auth(ctrl) < 0) {
+ pr_err("Failed to setup
re-authentication\n");
+ goto done_failure1;
+ }
+ }



> + req->sq->dhchap_step = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
> + } else if (data->auth_id != req->sq->dhchap_step)
> + goto done_failure1;
> + /* Validate negotiation parameters */
> + status = nvmet_auth_negotiate(req, d);

2021-09-26 22:54:03

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication


> +/* Assumes that the controller is in state RESETTING */
> +static void nvme_dhchap_auth_work(struct work_struct *work)
> +{
> + struct nvme_ctrl *ctrl =
> + container_of(work, struct nvme_ctrl, dhchap_auth_work);
> + int ret, q;
> +

Here I would print a single:
dev_info(ctrl->device, "re-authenticating controller");

This is instead of all the queue re-authentication prints that
should be dev_dbg.

Let's avoid doing the per-queue print...

2021-09-27 05:49:00

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication

On 9/27/21 12:53 AM, Sagi Grimberg wrote:
>
>> +/* Assumes that the controller is in state RESETTING */
>> +static void nvme_dhchap_auth_work(struct work_struct *work)
>> +{
>> +    struct nvme_ctrl *ctrl =
>> +        container_of(work, struct nvme_ctrl, dhchap_auth_work);
>> +    int ret, q;
>> +
>
> Here I would print a single:
>     dev_info(ctrl->device, "re-authenticating controller");
>
> This is instead of all the queue re-authentication prints that
> should be dev_dbg.
>
> Let's avoid doing the per-queue print...

Hmm. Actually the spec allows using different keys per queue, even
though our implementation doesn't. And fmds has struggled to come up
with a sane use case for that.
But yes, okay, will be updating it.

Cheers,

Hannes

2021-09-27 06:40:44

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 10/12] nvmet: Implement basic In-Band Authentication

On 9/27/21 12:51 AM, Sagi Grimberg wrote:
>
>> +void nvmet_execute_auth_send(struct nvmet_req *req)
>> +{
>> +    struct nvmet_ctrl *ctrl = req->sq->ctrl;
>> +    struct nvmf_auth_dhchap_success2_data *data;
>> +    void *d;
>> +    u32 tl;
>> +    u16 status = 0;
>> +
>> +    if (req->cmd->auth_send.secp !=
>> NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>> +        req->error_loc =
>> +            offsetof(struct nvmf_auth_send_command, secp);
>> +        goto done;
>> +    }
>> +    if (req->cmd->auth_send.spsp0 != 0x01) {
>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>> +        req->error_loc =
>> +            offsetof(struct nvmf_auth_send_command, spsp0);
>> +        goto done;
>> +    }
>> +    if (req->cmd->auth_send.spsp1 != 0x01) {
>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>> +        req->error_loc =
>> +            offsetof(struct nvmf_auth_send_command, spsp1);
>> +        goto done;
>> +    }
>> +    tl = le32_to_cpu(req->cmd->auth_send.tl);
>> +    if (!tl) {
>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>> +        req->error_loc =
>> +            offsetof(struct nvmf_auth_send_command, tl);
>> +        goto done;
>> +    }
>> +    if (!nvmet_check_transfer_len(req, tl)) {
>> +        pr_debug("%s: transfer length mismatch (%u)\n", __func__, tl);
>> +        return;
>> +    }
>> +
>> +    d = kmalloc(tl, GFP_KERNEL);
>> +    if (!d) {
>> +        status = NVME_SC_INTERNAL;
>> +        goto done;
>> +    }
>> +
>> +    status = nvmet_copy_from_sgl(req, 0, d, tl);
>> +    if (status) {
>> +        kfree(d);
>> +        goto done;
>> +    }
>> +
>> +    data = d;
>> +    pr_debug("%s: ctrl %d qid %d type %d id %d step %x\n", __func__,
>> +         ctrl->cntlid, req->sq->qid, data->auth_type, data->auth_id,
>> +         req->sq->dhchap_step);
>> +    if (data->auth_type != NVME_AUTH_COMMON_MESSAGES &&
>> +        data->auth_type != NVME_AUTH_DHCHAP_MESSAGES)
>> +        goto done_failure1;
>> +    if (data->auth_type == NVME_AUTH_COMMON_MESSAGES) {
>> +        if (data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE) {
>> +            /* Restart negotiation */
>> +            pr_debug("%s: ctrl %d qid %d reset negotiation\n", __func__,
>> +                 ctrl->cntlid, req->sq->qid);
>
> This is the point where you need to reset also auth config as this may
> have changed and the host will not create a new controller but rather
> re-authenticate on the existing controller.
>
> i.e.
>
> +                       if (!req->sq->qid) {
> +                               nvmet_destroy_auth(ctrl);
> +                               if (nvmet_setup_auth(ctrl) < 0) {
> +                                       pr_err("Failed to setup
> re-authentication\n");
> +                                       goto done_failure1;
> +                               }
> +                       }
>
>
>

Not sure. We have two paths by which re-authentication can be triggered.
The one is from the host, which sends a 'negotiate' command to the
controller (ie this path). Then nothing on the controller has changed,
and we just need to ensure that we restart negotiation.
IE we should _not_ reset the authentication (as that would also remove
the controller keys, which haven't changed). We should just ensure that
all ephemeral data is regenerated. But that should be handled in-line,
and I _think_ I have covered all of that.
The other path to trigger re-authentication is when changing values on
the controller via configfs. Then sure we need to reset the controller
data, and trigger reauthentication.
And there I do agree, that path isn't fully implemented / tested.
But should be started whenever the configfs values change.

Cheers,

Hannes

2021-09-27 07:18:07

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 10/12] nvmet: Implement basic In-Band Authentication

On 9/27/21 8:40 AM, Hannes Reinecke wrote:
> On 9/27/21 12:51 AM, Sagi Grimberg wrote:
>>
>>> +void nvmet_execute_auth_send(struct nvmet_req *req)
>>> +{
>>> +    struct nvmet_ctrl *ctrl = req->sq->ctrl;
>>> +    struct nvmf_auth_dhchap_success2_data *data;
>>> +    void *d;
>>> +    u32 tl;
>>> +    u16 status = 0;
>>> +
>>> +    if (req->cmd->auth_send.secp !=
>>> NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>> +        req->error_loc =
>>> +            offsetof(struct nvmf_auth_send_command, secp);
>>> +        goto done;
>>> +    }
>>> +    if (req->cmd->auth_send.spsp0 != 0x01) {
>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>> +        req->error_loc =
>>> +            offsetof(struct nvmf_auth_send_command, spsp0);
>>> +        goto done;
>>> +    }
>>> +    if (req->cmd->auth_send.spsp1 != 0x01) {
>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>> +        req->error_loc =
>>> +            offsetof(struct nvmf_auth_send_command, spsp1);
>>> +        goto done;
>>> +    }
>>> +    tl = le32_to_cpu(req->cmd->auth_send.tl);
>>> +    if (!tl) {
>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>> +        req->error_loc =
>>> +            offsetof(struct nvmf_auth_send_command, tl);
>>> +        goto done;
>>> +    }
>>> +    if (!nvmet_check_transfer_len(req, tl)) {
>>> +        pr_debug("%s: transfer length mismatch (%u)\n", __func__, tl);
>>> +        return;
>>> +    }
>>> +
>>> +    d = kmalloc(tl, GFP_KERNEL);
>>> +    if (!d) {
>>> +        status = NVME_SC_INTERNAL;
>>> +        goto done;
>>> +    }
>>> +
>>> +    status = nvmet_copy_from_sgl(req, 0, d, tl);
>>> +    if (status) {
>>> +        kfree(d);
>>> +        goto done;
>>> +    }
>>> +
>>> +    data = d;
>>> +    pr_debug("%s: ctrl %d qid %d type %d id %d step %x\n", __func__,
>>> +         ctrl->cntlid, req->sq->qid, data->auth_type, data->auth_id,
>>> +         req->sq->dhchap_step);
>>> +    if (data->auth_type != NVME_AUTH_COMMON_MESSAGES &&
>>> +        data->auth_type != NVME_AUTH_DHCHAP_MESSAGES)
>>> +        goto done_failure1;
>>> +    if (data->auth_type == NVME_AUTH_COMMON_MESSAGES) {
>>> +        if (data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE) {
>>> +            /* Restart negotiation */
>>> +            pr_debug("%s: ctrl %d qid %d reset negotiation\n",
>>> __func__,
>>> +                 ctrl->cntlid, req->sq->qid);
>>
>> This is the point where you need to reset also auth config as this may
>> have changed and the host will not create a new controller but rather
>> re-authenticate on the existing controller.
>>
>> i.e.
>>
>> +                       if (!req->sq->qid) {
>> +                               nvmet_destroy_auth(ctrl);
>> +                               if (nvmet_setup_auth(ctrl) < 0) {
>> +                                       pr_err("Failed to setup
>> re-authentication\n");
>> +                                       goto done_failure1;
>> +                               }
>> +                       }
>>
>>
>>
>
> Not sure. We have two paths by which re-authentication can be triggered.
> The one is from the host, which sends a 'negotiate' command to the
> controller (ie this path).  Then nothing on the controller has changed,
> and we just need to ensure that we restart negotiation.
> IE we should _not_ reset the authentication (as that would also remove
> the controller keys, which haven't changed). We should just ensure that
> all ephemeral data is regenerated. But that should be handled in-line,
> and I _think_ I have covered all of that.
> The other path to trigger re-authentication is when changing values on
> the controller via configfs. Then sure we need to reset the controller
> data, and trigger reauthentication.
> And there I do agree, that path isn't fully implemented / tested.
> But should be started whenever the configfs values change.
>
Actually, having re-read the spec I'm not sure if the second path is
correct.
As per spec only the _host_ can trigger re-authentication. There is no
provision for the controller to trigger re-authentication, and given
that re-auth is a soft-state anyway (ie the current authentication stays
valid until re-auth enters a final state) I _think_ we should be good
with the current implementation, where we can change the controller keys
via configfs, but they will only become active once the host triggers
re-authentication.

And indeed, that's the only way it could work; otherwise it'll be
tricky to change keys in a running connection.
If we were to force renegotiation when changing controller keys we would
immediately fail the connection, as we cannot guarantee that controller
_and_ host keys are changed at the same time.

Cheers,

Hannes

2021-09-27 07:27:03

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication

On 9/27/21 12:04 AM, Sagi Grimberg wrote:
>
>> +/* Assumes that the controller is in state RESETTING */
>> +static void nvme_dhchap_auth_work(struct work_struct *work)
>> +{
>> +    struct nvme_ctrl *ctrl =
>> +        container_of(work, struct nvme_ctrl, dhchap_auth_work);
>> +    int ret, q;
>> +
>> +    nvme_stop_queues(ctrl);
>
>     blk_mq_quiesce_queue(ctrl->admin_q);
>
>> +    /* Authenticate admin queue first */
>> +    ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY);
>> +    if (ret) {
>> +        dev_warn(ctrl->device,
>> +             "qid 0: error %d setting up authentication\n", ret);
>> +        goto out;
>> +    }
>> +    ret = nvme_auth_wait(ctrl, NVME_QID_ANY);
>> +    if (ret) {
>> +        dev_warn(ctrl->device,
>> +             "qid 0: authentication failed\n");
>> +        goto out;
>> +    }
>> +    dev_info(ctrl->device, "qid 0: authenticated\n");
>> +
>> +    for (q = 1; q < ctrl->queue_count; q++) {
>> +        ret = nvme_auth_negotiate(ctrl, q);
>> +        if (ret) {
>> +            dev_warn(ctrl->device,
>> +                 "qid %d: error %d setting up authentication\n",
>> +                 q, ret);
>> +            goto out;
>> +        }
>> +    }
>> +out:
>> +    /*
>> +     * Failure is a soft-state; credentials remain valid until
>> +     * the controller terminates the connection.
>> +     */
>> +    if (nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
>> +        nvme_start_queues(ctrl);
>         blk_mq_unquiesce_queue(ctrl->admin_q);
>
>> +}

Actually, after recent discussions on the fmds group there shouldn't be
a requirement to stop the queues, so I'll be dropping the stop/start
queue things.
(And the change in controller state, too, as it isn't required, either).

Cheers,

Hannes

2021-09-27 07:53:51

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication


> Actually, after recent discussions on the fmds group there shouldn't be
> a requirement to stop the queues, so I'll be dropping the stop/start
> queue things.
> (And the change in controller state, too, as it isn't required, either).

Hmm, ok.

2021-09-27 07:53:56

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 07/12] nvme: Implement In-Band authentication



On 9/27/21 8:48 AM, Hannes Reinecke wrote:
> On 9/27/21 12:53 AM, Sagi Grimberg wrote:
>>
>>> +/* Assumes that the controller is in state RESETTING */
>>> +static void nvme_dhchap_auth_work(struct work_struct *work)
>>> +{
>>> +    struct nvme_ctrl *ctrl =
>>> +        container_of(work, struct nvme_ctrl, dhchap_auth_work);
>>> +    int ret, q;
>>> +
>>
>> Here I would print a single:
>>      dev_info(ctrl->device, "re-authenticating controller");
>>
>> This is instead of all the queue re-authentication prints that
>> should be dev_dbg.
>>
>> Let's avoid doing the per-queue print...
>
> Hmm. Actually the spec allows using different keys per queue, even
> though our implementation doesn't. And fmds has struggled to come up
> with a sane use case for that.

We don't need to support it, but regardless we should not
info print per-queue.

> But yes, okay, will be updating it.

Great...

2021-09-27 07:59:00

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 10/12] nvmet: Implement basic In-Band Authentication



On 9/27/21 10:17 AM, Hannes Reinecke wrote:
> On 9/27/21 8:40 AM, Hannes Reinecke wrote:
>> On 9/27/21 12:51 AM, Sagi Grimberg wrote:
>>>
>>>> +void nvmet_execute_auth_send(struct nvmet_req *req)
>>>> +{
>>>> +    struct nvmet_ctrl *ctrl = req->sq->ctrl;
>>>> +    struct nvmf_auth_dhchap_success2_data *data;
>>>> +    void *d;
>>>> +    u32 tl;
>>>> +    u16 status = 0;
>>>> +
>>>> +    if (req->cmd->auth_send.secp !=
>>>> NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
>>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>>> +        req->error_loc =
>>>> +            offsetof(struct nvmf_auth_send_command, secp);
>>>> +        goto done;
>>>> +    }
>>>> +    if (req->cmd->auth_send.spsp0 != 0x01) {
>>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>>> +        req->error_loc =
>>>> +            offsetof(struct nvmf_auth_send_command, spsp0);
>>>> +        goto done;
>>>> +    }
>>>> +    if (req->cmd->auth_send.spsp1 != 0x01) {
>>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>>> +        req->error_loc =
>>>> +            offsetof(struct nvmf_auth_send_command, spsp1);
>>>> +        goto done;
>>>> +    }
>>>> +    tl = le32_to_cpu(req->cmd->auth_send.tl);
>>>> +    if (!tl) {
>>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>>> +        req->error_loc =
>>>> +            offsetof(struct nvmf_auth_send_command, tl);
>>>> +        goto done;
>>>> +    }
>>>> +    if (!nvmet_check_transfer_len(req, tl)) {
>>>> +        pr_debug("%s: transfer length mismatch (%u)\n", __func__, tl);
>>>> +        return;
>>>> +    }
>>>> +
>>>> +    d = kmalloc(tl, GFP_KERNEL);
>>>> +    if (!d) {
>>>> +        status = NVME_SC_INTERNAL;
>>>> +        goto done;
>>>> +    }
>>>> +
>>>> +    status = nvmet_copy_from_sgl(req, 0, d, tl);
>>>> +    if (status) {
>>>> +        kfree(d);
>>>> +        goto done;
>>>> +    }
>>>> +
>>>> +    data = d;
>>>> +    pr_debug("%s: ctrl %d qid %d type %d id %d step %x\n", __func__,
>>>> +         ctrl->cntlid, req->sq->qid, data->auth_type, data->auth_id,
>>>> +         req->sq->dhchap_step);
>>>> +    if (data->auth_type != NVME_AUTH_COMMON_MESSAGES &&
>>>> +        data->auth_type != NVME_AUTH_DHCHAP_MESSAGES)
>>>> +        goto done_failure1;
>>>> +    if (data->auth_type == NVME_AUTH_COMMON_MESSAGES) {
>>>> +        if (data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE) {
>>>> +            /* Restart negotiation */
>>>> +            pr_debug("%s: ctrl %d qid %d reset negotiation\n",
>>>> __func__,
>>>> +                 ctrl->cntlid, req->sq->qid);
>>>
>>> This is the point where you need to reset also auth config as this may
>>> have changed and the host will not create a new controller but rather
>>> re-authenticate on the existing controller.
>>>
>>> i.e.
>>>
>>> +                       if (!req->sq->qid) {
>>> +                               nvmet_destroy_auth(ctrl);
>>> +                               if (nvmet_setup_auth(ctrl) < 0) {
>>> +                                       pr_err("Failed to setup
>>> re-authentication\n");
>>> +                                       goto done_failure1;
>>> +                               }
>>> +                       }
>>>
>>>
>>>
>>
>> Not sure. We have two paths by which re-authentication can be triggered.
>> The one is from the host, which sends a 'negotiate' command to the
>> controller (ie this path).  Then nothing on the controller has
>> changed, and we just need to ensure that we restart negotiation.
>> IE we should _not_ reset the authentication (as that would also remove
>> the controller keys, which haven't changed). We should just ensure
>> that all ephemeral data is regenerated. But that should be handled
>> in-line, and I _think_ I have covered all of that.
>> The other path to trigger re-authentication is when changing values on
>> the controller via configfs. Then sure we need to reset the controller
>> data, and trigger reauthentication.
>> And there I do agree, that path isn't fully implemented / tested.
>> But should be started whenever the configfs values change.
>>
> Actually, having re-read the spec I'm not sure if the second path is
> correct.
> As per spec only the _host_ can trigger re-authentication. There is no
> provision for the controller to trigger re-authentication, and given
> that re-auth is a soft-state anyway (ie the current authentication stays
> valid until re-auth enters a final state) I _think_ we should be good
> with the current implementation, where we can change the controller keys
> via configfs, but they will only become active once the host triggers
> re-authentication.

Agree, so the proposed addition is good with you?

> And indeed, that's the only way how it could work, otherwise it'll be
> tricky to change keys in a running connection.
> If we were to force renegotiation when changing controller keys we would
> immediately fail the connection, as we cannot guarantee that controller
> _and_ host keys are changed at the same time.

Exactly, changing the host key on the controller must not trigger
re-auth; the host will remain connected and operational as it
authenticated before. Once the host re-authenticates or reconnects,
it needs to authenticate against the new key.
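The two re-authentication triggers discussed in this exchange can be modeled in a toy userspace sketch. All struct and function names below are invented for illustration; this is not the patch code, just the behavior the thread agrees on:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy model of a controller's per-host authentication state. */
struct demo_ctrl {
	char host_key[32];	/* persistent DH-HMAC-CHAP key */
	char ephemeral[32];	/* per-negotiation challenge / DH data */
	bool authenticated;	/* soft state: stays valid during re-auth */
};

/*
 * Path 1: the host sends AUTH_Negotiate. Only the ephemeral material
 * is regenerated; the configured keys are left untouched.
 */
static void demo_restart_negotiation(struct demo_ctrl *ctrl)
{
	memset(ctrl->ephemeral, 0, sizeof(ctrl->ephemeral));
	/* a fresh challenge would be generated here */
}

/*
 * Path 2: a key is changed via configfs. The new key is only
 * recorded; the existing session stays authenticated until the host
 * itself triggers re-authentication.
 */
static void demo_set_host_key(struct demo_ctrl *ctrl, const char *key)
{
	memset(ctrl->host_key, 0, sizeof(ctrl->host_key));
	strncpy(ctrl->host_key, key, sizeof(ctrl->host_key) - 1);
	/* ctrl->authenticated is deliberately not cleared */
}
```

The key design point is the asymmetry: only the host-initiated path touches session state, while the configfs path merely stages a new key for the next handshake.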

2021-09-27 08:28:49

by Hannes Reinecke

Subject: Re: [PATCH 10/12] nvmet: Implement basic In-Band Authentication

On 9/27/21 9:55 AM, Sagi Grimberg wrote:
>
>
> On 9/27/21 10:17 AM, Hannes Reinecke wrote:
>> On 9/27/21 8:40 AM, Hannes Reinecke wrote:
>>> On 9/27/21 12:51 AM, Sagi Grimberg wrote:
>>>>
>>>>> +void nvmet_execute_auth_send(struct nvmet_req *req)
>>>>> +{
>>>>> +    struct nvmet_ctrl *ctrl = req->sq->ctrl;
>>>>> +    struct nvmf_auth_dhchap_success2_data *data;
>>>>> +    void *d;
>>>>> +    u32 tl;
>>>>> +    u16 status = 0;
>>>>> +
>>>>> +    if (req->cmd->auth_send.secp !=
>>>>> NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER) {
>>>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>>>> +        req->error_loc =
>>>>> +            offsetof(struct nvmf_auth_send_command, secp);
>>>>> +        goto done;
>>>>> +    }
>>>>> +    if (req->cmd->auth_send.spsp0 != 0x01) {
>>>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>>>> +        req->error_loc =
>>>>> +            offsetof(struct nvmf_auth_send_command, spsp0);
>>>>> +        goto done;
>>>>> +    }
>>>>> +    if (req->cmd->auth_send.spsp1 != 0x01) {
>>>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>>>> +        req->error_loc =
>>>>> +            offsetof(struct nvmf_auth_send_command, spsp1);
>>>>> +        goto done;
>>>>> +    }
>>>>> +    tl = le32_to_cpu(req->cmd->auth_send.tl);
>>>>> +    if (!tl) {
>>>>> +        status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>>>>> +        req->error_loc =
>>>>> +            offsetof(struct nvmf_auth_send_command, tl);
>>>>> +        goto done;
>>>>> +    }
>>>>> +    if (!nvmet_check_transfer_len(req, tl)) {
>>>>> +        pr_debug("%s: transfer length mismatch (%u)\n", __func__,
>>>>> tl);
>>>>> +        return;
>>>>> +    }
>>>>> +
>>>>> +    d = kmalloc(tl, GFP_KERNEL);
>>>>> +    if (!d) {
>>>>> +        status = NVME_SC_INTERNAL;
>>>>> +        goto done;
>>>>> +    }
>>>>> +
>>>>> +    status = nvmet_copy_from_sgl(req, 0, d, tl);
>>>>> +    if (status) {
>>>>> +        kfree(d);
>>>>> +        goto done;
>>>>> +    }
>>>>> +
>>>>> +    data = d;
>>>>> +    pr_debug("%s: ctrl %d qid %d type %d id %d step %x\n", __func__,
>>>>> +         ctrl->cntlid, req->sq->qid, data->auth_type, data->auth_id,
>>>>> +         req->sq->dhchap_step);
>>>>> +    if (data->auth_type != NVME_AUTH_COMMON_MESSAGES &&
>>>>> +        data->auth_type != NVME_AUTH_DHCHAP_MESSAGES)
>>>>> +        goto done_failure1;
>>>>> +    if (data->auth_type == NVME_AUTH_COMMON_MESSAGES) {
>>>>> +        if (data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE) {
>>>>> +            /* Restart negotiation */
>>>>> +            pr_debug("%s: ctrl %d qid %d reset negotiation\n",
>>>>> __func__,
>>>>> +                 ctrl->cntlid, req->sq->qid);
>>>>
>>>> This is the point where you need to reset also auth config as this may
>>>> have changed and the host will not create a new controller but rather
>>>> re-authenticate on the existing controller.
>>>>
>>>> i.e.
>>>>
>>>> +                       if (!req->sq->qid) {
>>>> +                               nvmet_destroy_auth(ctrl);
>>>> +                               if (nvmet_setup_auth(ctrl) < 0) {
>>>> +                                       pr_err("Failed to setup
>>>> re-authentication\n");
>>>> +                                       goto done_failure1;
>>>> +                               }
>>>> +                       }
>>>>
>>>>
>>>>
>>>
>>> Not sure. There are two paths by which re-authentication can be
>>> triggered.
>>> One is from the host, which sends a 'negotiate' command to the
>>> controller (i.e. this path). Then nothing on the controller has
>>> changed, and we just need to ensure that negotiation is restarted.
>>> I.e. we should _not_ reset the authentication (as that would also
>>> remove the controller keys, which haven't changed). We should just
>>> ensure that all ephemeral data is regenerated. But that should be
>>> handled in-line, and I _think_ I have covered all of that.
>>> The other path to trigger re-authentication is changing values on
>>> the controller via configfs. Then we do need to reset the
>>> controller data and trigger re-authentication.
>>> And there I do agree: that path isn't fully implemented / tested,
>>> but it should be started whenever the configfs values change.
>>>
>> Actually, having re-read the spec I'm not sure if the second path is
>> correct.
>> As per spec only the _host_ can trigger re-authentication. There is no
>> provision for the controller to trigger re-authentication, and given
>> that re-auth is a soft-state anyway (ie the current authentication
>> stays valid until re-auth enters a final state) I _think_ we should be
>> good with the current implementation, where we can change the
>> controller keys
>> via configfs, but they will only become active once the host triggers
>> re-authentication.
>
> Agree, so the proposed addition is good with you?
>
Why would we need it?
I do agree there's a bit missing for removing the old shash_tfm if
there is a hash-id mismatch, but why would we need to reset the entire
authentication?
The important (i.e. cryptographically relevant) bits are cleared in
nvmet_auth_sq_free(), after authentication has completed.
So why would we need to reset keys and TFMs?
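The clearing argued for here can be sketched in isolation. This is an illustrative userspace model, not nvmet_auth_sq_free() itself; the struct fields and names are invented:

```c
#include <assert.h>
#include <string.h>

/*
 * Toy model of per-queue authentication material: once a negotiation
 * reaches a final state, the cryptographically relevant data can be
 * wiped without touching the keys configured on the controller.
 */
struct demo_sq {
	unsigned char challenge[64];	/* controller challenge */
	unsigned char session_resp[64];	/* computed response */
};

static void demo_auth_sq_free(struct demo_sq *sq)
{
	/*
	 * In kernel code this would use memzero_explicit() so the
	 * compiler cannot elide the wipe of sensitive data.
	 */
	memset(sq->challenge, 0, sizeof(sq->challenge));
	memset(sq->session_resp, 0, sizeof(sq->session_resp));
}
```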

>> And indeed, that's the only way how it could work, otherwise it'll be
>> tricky to change keys in a running connection.
>> If we were to force renegotiation when changing controller keys we
>> would immediately fail the connection, as we cannot guarantee that
>> controller _and_ host keys are changed at the same time.
>
> Exactly, changing the host key on the controller must not trigger
> re-auth; the host will remain connected and operational as it
> authenticated before. Once the host re-authenticates or reconnects,
> it needs to authenticate against the new key.

Right. I'll be adding a comment to that effect to the configfs functions.

Cheers,

Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

2021-09-28 22:37:29

by Sagi Grimberg

Subject: Re: [PATCH 10/12] nvmet: Implement basic In-Band Authentication


>>> Actually, having re-read the spec I'm not sure if the second path is
>>> correct.
>>> As per spec only the _host_ can trigger re-authentication. There is no
>>> provision for the controller to trigger re-authentication, and given
>>> that re-auth is a soft-state anyway (ie the current authentication
>>> stays valid until re-auth enters a final state) I _think_ we should be
>>> good with the current implementation, where we can change the
>>> controller keys
>>> via configfs, but they will only become active once the host triggers
>>> re-authentication.
>>
>> Agree, so the proposed addition is good with you?
>>
> Why would we need it?
> I do agree there's a bit missing for removing the old shash_tfm if there
> is a hash-id mismatch, but why would we need to reset the entire
> authentication?

We just need to pick up the new host dhchap_key at this point, as the
host is doing a re-authentication. I agree we don't need a big
hammer, but the re-authentication must not use the old host
dhchap_key.
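The narrower fix being asked for can be sketched as follows. The names are hypothetical (this is not the patch code): the point is simply that on renegotiation the controller re-reads the current key rather than reusing the copy taken at connect time.

```c
#include <assert.h>
#include <string.h>

/* Toy model of key storage updated via configfs. */
struct demo_host {
	char configured_key[32];	/* may change at any time */
};

/* Toy model of per-session handshake state. */
struct demo_session {
	char active_key[32];		/* key the handshake will use */
};

static void demo_handle_negotiate(struct demo_session *s,
				  const struct demo_host *host)
{
	/*
	 * Pick up the current key so a renegotiation never runs the
	 * handshake against a stale dhchap_key.
	 */
	memcpy(s->active_key, host->configured_key,
	       sizeof(s->active_key));
}
```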

2021-09-29 06:12:15

by Hannes Reinecke

Subject: Re: [PATCH 10/12] nvmet: Implement basic In-Band Authentication

On 9/29/21 12:36 AM, Sagi Grimberg wrote:
>
>>>> Actually, having re-read the spec I'm not sure if the second path is
>>>> correct.
>>>> As per spec only the _host_ can trigger re-authentication. There is no
>>>> provision for the controller to trigger re-authentication, and given
>>>> that re-auth is a soft-state anyway (ie the current authentication
>>>> stays valid until re-auth enters a final state) I _think_ we should be
>>>> good with the current implementation, where we can change the
>>>> controller keys
>>>> via configfs, but they will only become active once the host triggers
>>>> re-authentication.
>>>
>>> Agree, so the proposed addition is good with you?
>>>
>> Why would we need it?
>> I do agree there's a bit missing for removing the old shash_tfm if there
>> is a hash-id mismatch, but why would we need to reset the entire
>> authentication?
>
> We just need to pick up the new host dhchap_key at this point, as the
> host is doing a re-authentication. I agree we don't need a big
> hammer, but the re-authentication must not use the old host
> dhchap_key.

Sure. And, upon reviewing, I guess you are right; will be including your
snippet.
For the next round :-)

Cheers,

Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer