2023-09-28 17:50:45

by Lukas Wunner

Subject: [PATCH 00/12] PCI device authentication

Authenticate PCI devices with CMA-SPDM (PCIe r6.1 sec 6.31) and
expose the result in sysfs. This enables user-defined policies
such as forbidding driver binding to devices which failed
authentication.
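
A hedged illustration of such a policy from user space (the sysfs
attribute name "authenticated" and its 0/1 value are assumptions here;
the actual ABI is defined in patch [09/12]):

        /* Refuse to proceed unless the device authenticated successfully. */
        #include <stdio.h>

        int device_is_authenticated(const char *bdf)
        {
                char path[128];
                int val = 0;
                FILE *f;

                snprintf(path, sizeof(path),
                         "/sys/bus/pci/devices/%s/authenticated", bdf);
                f = fopen(path, "r");
                if (!f)
                        return 0;       /* attribute absent: treat as untrusted */
                if (fscanf(f, "%d", &val) != 1)
                        val = 0;
                fclose(f);
                return val == 1;
        }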

CMA-SPDM forms the basis for PCI encryption (PCIe r6.1 sec 6.33),
which will be submitted later.

The meat of the series is in patches [07/12] and [08/12], which contain
the SPDM library and the CMA glue code (the PCI adaptation of SPDM).

The reason why SPDM is done in-kernel is provided in patch [10/12]:
Briefly, when devices are reauthenticated on resume from system sleep,
user space is not yet available. The same applies when reauthenticating
after recovery from reset.

One use case for CMA-SPDM and PCI encryption is confidential access
to passed-through devices: Neither the host nor other guests are
able to eavesdrop on device accesses, in particular if guest memory
is encrypted as well.

Further use cases for the SPDM library are appearing on the horizon:
Alistair Francis and Wilfred Mallawa from WDC are interested in using
it for SCSI/SATA. David Box from Intel has implemented measurement
retrieval over SPDM.

The root of trust is initially an in-kernel keyring of certificates.
We can discuss linking the system keyring into it, thereby allowing
EFI to pass trusted certificates to the kernel for CMA. Alternatively,
a bundle of trusted certificates could be loaded from the initrd.
I envision that we'll add TPMs or remote attestation services such as
https://keylime.dev/ to create an ecosystem of various trust sources.

If you wish to play with PCI device authentication but lack capable
hardware, Wilfred has written a guide on how to test with QEMU:
https://github.com/twilfredo/spdm-emulation-guide-b

Jonathan Cameron (2):
spdm: Introduce library to authenticate devices
PCI/CMA: Authenticate devices on enumeration

Lukas Wunner (10):
X.509: Make certificate parser public
X.509: Parse Subject Alternative Name in certificates
X.509: Move certificate length retrieval into new helper
certs: Create blacklist keyring earlier
crypto: akcipher - Support more than one signature encoding
crypto: ecdsa - Support P1363 signature encoding
PCI/CMA: Validate Subject Alternative Name in certificates
PCI/CMA: Reauthenticate devices on reset and resume
PCI/CMA: Expose in sysfs whether devices are authenticated
PCI/CMA: Grant guests exclusive control of authentication

Documentation/ABI/testing/sysfs-bus-pci | 27 +
MAINTAINERS | 10 +
certs/blacklist.c | 4 +-
crypto/akcipher.c | 2 +-
crypto/asymmetric_keys/public_key.c | 12 +-
crypto/asymmetric_keys/x509_cert_parser.c | 15 +
crypto/asymmetric_keys/x509_loader.c | 38 +-
crypto/asymmetric_keys/x509_parser.h | 37 +-
crypto/ecdsa.c | 16 +-
crypto/internal.h | 1 +
crypto/rsa-pkcs1pad.c | 11 +-
crypto/sig.c | 6 +-
crypto/testmgr.c | 8 +-
crypto/testmgr.h | 16 +
drivers/pci/Kconfig | 16 +
drivers/pci/Makefile | 5 +
drivers/pci/cma-sysfs.c | 73 +
drivers/pci/cma-x509.c | 119 ++
drivers/pci/cma.asn1 | 36 +
drivers/pci/cma.c | 151 +++
drivers/pci/doe.c | 5 +-
drivers/pci/pci-driver.c | 1 +
drivers/pci/pci-sysfs.c | 3 +
drivers/pci/pci.c | 12 +-
drivers/pci/pci.h | 17 +
drivers/pci/pcie/err.c | 3 +
drivers/pci/probe.c | 1 +
drivers/pci/remove.c | 1 +
drivers/vfio/pci/vfio_pci_core.c | 9 +-
include/crypto/akcipher.h | 10 +-
include/crypto/sig.h | 6 +-
include/keys/asymmetric-type.h | 2 +
include/keys/x509-parser.h | 46 +
include/linux/oid_registry.h | 3 +
include/linux/pci-doe.h | 4 +
include/linux/pci.h | 15 +
include/linux/spdm.h | 41 +
lib/Kconfig | 15 +
lib/Makefile | 2 +
lib/spdm_requester.c | 1510 +++++++++++++++++++++
40 files changed, 2232 insertions(+), 77 deletions(-)
create mode 100644 drivers/pci/cma-sysfs.c
create mode 100644 drivers/pci/cma-x509.c
create mode 100644 drivers/pci/cma.asn1
create mode 100644 drivers/pci/cma.c
create mode 100644 include/keys/x509-parser.h
create mode 100644 include/linux/spdm.h
create mode 100644 lib/spdm_requester.c

--
2.40.1


2023-09-28 17:53:44

by Lukas Wunner

Subject: [PATCH 01/12] X.509: Make certificate parser public

The upcoming support for PCI device authentication with CMA-SPDM
(PCIe r6.1 sec 6.31) requires validating the Subject Alternative Name
in X.509 certificates.

High-level functions for X.509 parsing such as key_create_or_update()
throw away the internal, low-level struct x509_certificate after
extracting the struct public_key and public_key_signature from it.
The Subject Alternative Name is thus inaccessible when using those
functions.

Afford CMA-SPDM access to the Subject Alternative Name by making struct
x509_certificate public, together with the functions for parsing an
X.509 certificate into such a struct and freeing such a struct.
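
For illustration, a minimal sketch of a consumer using the now-public
parser (the function and the printed fields are illustrative; only
x509_cert_parse() and x509_free_certificate() come from this patch):

        #include <keys/x509-parser.h>
        #include <linux/err.h>
        #include <linux/printk.h>

        /* Parse a DER-encoded certificate, inspect it, then free it. */
        static int example_inspect_cert(const void *der, size_t der_len)
        {
                struct x509_certificate *cert;

                cert = x509_cert_parse(der, der_len);
                if (IS_ERR(cert))       /* parser returns ERR_PTR() on failure */
                        return PTR_ERR(cert);

                pr_info("issuer: %s, subject: %s\n", cert->issuer, cert->subject);

                x509_free_certificate(cert);
                return 0;
        }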

The private header file x509_parser.h previously included <linux/time.h>
for the definition of time64_t. That definition was since moved to
<linux/time64.h> by commit 361a3bf00582 ("time64: Add time64.h header
and define struct timespec64"), so adjust the #include directive as part
of the move to the new public header file <keys/x509-parser.h>.

No functional change intended.

Signed-off-by: Lukas Wunner <[email protected]>
---
crypto/asymmetric_keys/x509_parser.h | 37 +----------------------
include/keys/x509-parser.h | 44 ++++++++++++++++++++++++++++
2 files changed, 45 insertions(+), 36 deletions(-)
create mode 100644 include/keys/x509-parser.h

diff --git a/crypto/asymmetric_keys/x509_parser.h b/crypto/asymmetric_keys/x509_parser.h
index a299c9c56f40..a7ef43c39002 100644
--- a/crypto/asymmetric_keys/x509_parser.h
+++ b/crypto/asymmetric_keys/x509_parser.h
@@ -5,40 +5,7 @@
* Written by David Howells ([email protected])
*/

-#include <linux/time.h>
-#include <crypto/public_key.h>
-#include <keys/asymmetric-type.h>
-
-struct x509_certificate {
- struct x509_certificate *next;
- struct x509_certificate *signer; /* Certificate that signed this one */
- struct public_key *pub; /* Public key details */
- struct public_key_signature *sig; /* Signature parameters */
- char *issuer; /* Name of certificate issuer */
- char *subject; /* Name of certificate subject */
- struct asymmetric_key_id *id; /* Issuer + Serial number */
- struct asymmetric_key_id *skid; /* Subject + subjectKeyId (optional) */
- time64_t valid_from;
- time64_t valid_to;
- const void *tbs; /* Signed data */
- unsigned tbs_size; /* Size of signed data */
- unsigned raw_sig_size; /* Size of signature */
- const void *raw_sig; /* Signature data */
- const void *raw_serial; /* Raw serial number in ASN.1 */
- unsigned raw_serial_size;
- unsigned raw_issuer_size;
- const void *raw_issuer; /* Raw issuer name in ASN.1 */
- const void *raw_subject; /* Raw subject name in ASN.1 */
- unsigned raw_subject_size;
- unsigned raw_skid_size;
- const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
- unsigned index;
- bool seen; /* Infinite recursion prevention */
- bool verified;
- bool self_signed; /* T if self-signed (check unsupported_sig too) */
- bool unsupported_sig; /* T if signature uses unsupported crypto */
- bool blacklisted;
-};
+#include <keys/x509-parser.h>

/*
* selftest.c
@@ -52,8 +19,6 @@ static inline int fips_signature_selftest(void) { return 0; }
/*
* x509_cert_parser.c
*/
-extern void x509_free_certificate(struct x509_certificate *cert);
-extern struct x509_certificate *x509_cert_parse(const void *data, size_t datalen);
extern int x509_decode_time(time64_t *_t, size_t hdrlen,
unsigned char tag,
const unsigned char *value, size_t vlen);
diff --git a/include/keys/x509-parser.h b/include/keys/x509-parser.h
new file mode 100644
index 000000000000..7c2ebc84791f
--- /dev/null
+++ b/include/keys/x509-parser.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* X.509 certificate parser
+ *
+ * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells ([email protected])
+ */
+
+#include <crypto/public_key.h>
+#include <keys/asymmetric-type.h>
+#include <linux/time64.h>
+
+struct x509_certificate {
+ struct x509_certificate *next;
+ struct x509_certificate *signer; /* Certificate that signed this one */
+ struct public_key *pub; /* Public key details */
+ struct public_key_signature *sig; /* Signature parameters */
+ char *issuer; /* Name of certificate issuer */
+ char *subject; /* Name of certificate subject */
+ struct asymmetric_key_id *id; /* Issuer + Serial number */
+ struct asymmetric_key_id *skid; /* Subject + subjectKeyId (optional) */
+ time64_t valid_from;
+ time64_t valid_to;
+ const void *tbs; /* Signed data */
+ unsigned tbs_size; /* Size of signed data */
+ unsigned raw_sig_size; /* Size of signature */
+ const void *raw_sig; /* Signature data */
+ const void *raw_serial; /* Raw serial number in ASN.1 */
+ unsigned raw_serial_size;
+ unsigned raw_issuer_size;
+ const void *raw_issuer; /* Raw issuer name in ASN.1 */
+ const void *raw_subject; /* Raw subject name in ASN.1 */
+ unsigned raw_subject_size;
+ unsigned raw_skid_size;
+ const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
+ unsigned index;
+ bool seen; /* Infinite recursion prevention */
+ bool verified;
+ bool self_signed; /* T if self-signed (check unsupported_sig too) */
+ bool unsupported_sig; /* T if signature uses unsupported crypto */
+ bool blacklisted;
+};
+
+struct x509_certificate *x509_cert_parse(const void *data, size_t datalen);
+void x509_free_certificate(struct x509_certificate *cert);
--
2.40.1

2023-09-28 17:55:52

by Lukas Wunner

Subject: [PATCH 02/12] X.509: Parse Subject Alternative Name in certificates

The upcoming support for PCI device authentication with CMA-SPDM
(PCIe r6.1 sec 6.31) requires validating the Subject Alternative Name
in X.509 certificates.

Store a pointer to the Subject Alternative Name upon parsing for
consumption by CMA-SPDM.
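
A hypothetical fragment showing the intended consumption (the function
is a placeholder; the raw_san fields are the ones added below):

        #include <keys/x509-parser.h>
        #include <linux/printk.h>

        /* Dump the raw ASN.1 of the subjectAltName of a parsed certificate. */
        static void example_dump_san(const struct x509_certificate *cert)
        {
                if (!cert->raw_san) {
                        pr_info("certificate carries no subjectAltName\n");
                        return;
                }

                print_hex_dump_bytes("subjectAltName: ", DUMP_PREFIX_OFFSET,
                                     cert->raw_san, cert->raw_san_size);
        }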

Signed-off-by: Lukas Wunner <[email protected]>
---
crypto/asymmetric_keys/x509_cert_parser.c | 15 +++++++++++++++
include/keys/x509-parser.h | 2 ++
2 files changed, 17 insertions(+)

diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
index 0a7049b470c1..18dfd564740b 100644
--- a/crypto/asymmetric_keys/x509_cert_parser.c
+++ b/crypto/asymmetric_keys/x509_cert_parser.c
@@ -579,6 +579,21 @@ int x509_process_extension(void *context, size_t hdrlen,
return 0;
}

+ if (ctx->last_oid == OID_subjectAltName) {
+ /*
+ * A certificate MUST NOT include more than one instance
+ * of a particular extension (RFC 5280 sec 4.2).
+ */
+ if (ctx->cert->raw_san) {
+ pr_err("Duplicate Subject Alternative Name\n");
+ return -EINVAL;
+ }
+
+ ctx->cert->raw_san = v;
+ ctx->cert->raw_san_size = vlen;
+ return 0;
+ }
+
if (ctx->last_oid == OID_keyUsage) {
/*
* Get hold of the keyUsage bit string
diff --git a/include/keys/x509-parser.h b/include/keys/x509-parser.h
index 7c2ebc84791f..9c6e7cdf4870 100644
--- a/include/keys/x509-parser.h
+++ b/include/keys/x509-parser.h
@@ -32,6 +32,8 @@ struct x509_certificate {
unsigned raw_subject_size;
unsigned raw_skid_size;
const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
+ const void *raw_san; /* Raw subjectAltName in ASN.1 */
+ unsigned raw_san_size;
unsigned index;
bool seen; /* Infinite recursion prevention */
bool verified;
--
2.40.1

2023-09-28 17:58:38

by Lukas Wunner

Subject: [PATCH 04/12] certs: Create blacklist keyring earlier

The upcoming support for PCI device authentication with CMA-SPDM
(PCIe r6.1 sec 6.31) requires parsing X.509 certificates upon
device enumeration, which happens in a subsys_initcall().

Parsing X.509 certificates accesses the blacklist keyring:
x509_cert_parse()
x509_get_sig_params()
is_hash_blacklisted()
keyring_search()

So far the keyring is created much later in a device_initcall(). Avoid
a NULL pointer dereference on access to the keyring by creating it one
initcall level earlier than PCI device enumeration, i.e. in an
arch_initcall().

Signed-off-by: Lukas Wunner <[email protected]>
---
certs/blacklist.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/certs/blacklist.c b/certs/blacklist.c
index 675dd7a8f07a..34185415d451 100644
--- a/certs/blacklist.c
+++ b/certs/blacklist.c
@@ -311,7 +311,7 @@ static int restrict_link_for_blacklist(struct key *dest_keyring,
* Initialise the blacklist
*
* The blacklist_init() function is registered as an initcall via
- * device_initcall(). As a result if the blacklist_init() function fails for
+ * arch_initcall(). As a result if the blacklist_init() function fails for
* any reason the kernel continues to execute. While cleanly returning -ENODEV
* could be acceptable for some non-critical kernel parts, if the blacklist
* keyring fails to load it defeats the certificate/key based deny list for
@@ -356,7 +356,7 @@ static int __init blacklist_init(void)
/*
* Must be initialised before we try and load the keys into the keyring.
*/
-device_initcall(blacklist_init);
+arch_initcall(blacklist_init);

#ifdef CONFIG_SYSTEM_REVOCATION_LIST
/*
--
2.40.1

2023-09-28 18:06:49

by Lukas Wunner

Subject: [PATCH 07/12] spdm: Introduce library to authenticate devices

From: Jonathan Cameron <[email protected]>

The Security Protocol and Data Model (SPDM) allows for authentication,
measurement, key exchange and encrypted sessions with devices.

A commonly used term for authentication and measurement is attestation.

SPDM was conceived by the Distributed Management Task Force (DMTF).
Its specification defines a request/response protocol spoken between
host and attached devices over a variety of transports:

https://www.dmtf.org/dsp/DSP0274

This implementation supports SPDM 1.0 through 1.3 (the latest version).
It is designed to be transport-agnostic as the kernel already supports
two different SPDM-capable transports:

* PCIe Data Object Exchange (PCIe r6.1 sec 6.30, drivers/pci/doe.c)
* Management Component Transport Protocol (MCTP,
Documentation/networking/mctp.rst)

Use cases for SPDM include, but are not limited to:

* PCIe Component Measurement and Authentication (PCIe r6.1 sec 6.31)
* Compute Express Link (CXL r3.0 sec 14.11.6)
* Open Compute Project (Attestation of System Components r1.0)
https://www.opencompute.org/documents/attestation-v1-0-20201104-pdf

The initial focus of this implementation is enabling PCIe CMA device
authentication. As such, only a subset of the SPDM specification is
contained herein, namely the request/response sequence GET_VERSION,
GET_CAPABILITIES, NEGOTIATE_ALGORITHMS, GET_DIGESTS, GET_CERTIFICATE
and CHALLENGE.

A simple API is provided for subsystems wishing to authenticate devices:
spdm_create(), spdm_authenticate() (can be called repeatedly for
reauthentication) and spdm_destroy(). Certificates presented by devices
are validated against an in-kernel keyring of trusted root certificates.
A pointer to the keyring is passed to spdm_create().
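
For illustration, a hedged usage sketch (the transport callback, its
private data and the 4 KiB message size are placeholders for whatever
the calling subsystem provides; the functions and the spdm_transport
typedef are the ones introduced by this patch):

        #include <linux/device.h>
        #include <linux/sizes.h>
        #include <linux/spdm.h>

        /* Placeholder transport: perform one request/response exchange and
         * return the number of response bytes received or a negative errno.
         */
        static int example_transport(void *priv, struct device *dev,
                                     const void *request, size_t request_sz,
                                     void *response, size_t response_sz)
        {
                return -EOPNOTSUPP;     /* real code would talk to e.g. a DOE mailbox */
        }

        static int example_authenticate(struct device *dev, struct key *root_keyring)
        {
                struct spdm_state *spdm;
                int rc;

                spdm = spdm_create(dev, example_transport, NULL, SZ_4K,
                                   root_keyring);
                if (!spdm)
                        return -ENOMEM;

                rc = spdm_authenticate(spdm);   /* may be repeated later */
                if (rc)
                        dev_warn(dev, "authentication failed: %d\n", rc);

                spdm_destroy(spdm);
                return rc;
        }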

The set of supported cryptographic algorithms is limited to those
declared mandatory in PCIe r6.1 sec 6.31.3. Adding more algorithms
is straightforward as long as the crypto subsystem supports them.

Future commits will extend this implementation with support for
measurement, key exchange and encrypted sessions.

So far, only the SPDM requester role is implemented. Care was taken to
allow for effortless addition of the responder role at a later stage.
This could be needed for a PCIe host bridge operating in endpoint mode.
The responder role will be able to reuse struct definitions and helpers
such as spdm_create_combined_prefix(). Those can be moved to
spdm_common.{h,c} files upon introduction of the responder role.
For now, all is kept in a single source file to avoid polluting the
global namespace with unnecessary symbols.

Credits: Jonathan wrote a proof-of-concept of this SPDM implementation.
Lukas reworked it for upstream.

Signed-off-by: Jonathan Cameron <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
---
MAINTAINERS | 9 +
include/linux/spdm.h | 35 +
lib/Kconfig | 15 +
lib/Makefile | 2 +
lib/spdm_requester.c | 1487 ++++++++++++++++++++++++++++++++++++++++++
5 files changed, 1548 insertions(+)
create mode 100644 include/linux/spdm.h
create mode 100644 lib/spdm_requester.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 90f13281d297..2591d2217d65 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19299,6 +19299,15 @@ M: Security Officers <[email protected]>
S: Supported
F: Documentation/process/security-bugs.rst

+SECURITY PROTOCOL AND DATA MODEL (SPDM)
+M: Jonathan Cameron <[email protected]>
+M: Lukas Wunner <[email protected]>
+L: [email protected]
+L: [email protected]
+S: Maintained
+F: include/linux/spdm.h
+F: lib/spdm*
+
SECURITY SUBSYSTEM
M: Paul Moore <[email protected]>
M: James Morris <[email protected]>
diff --git a/include/linux/spdm.h b/include/linux/spdm.h
new file mode 100644
index 000000000000..e824063793a7
--- /dev/null
+++ b/include/linux/spdm.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMTF Security Protocol and Data Model (SPDM)
+ * https://www.dmtf.org/dsp/DSP0274
+ *
+ * Copyright (C) 2021-22 Huawei
+ * Jonathan Cameron <[email protected]>
+ *
+ * Copyright (C) 2022-23 Intel Corporation
+ */
+
+#ifndef _SPDM_H_
+#define _SPDM_H_
+
+#include <linux/types.h>
+
+struct key;
+struct device;
+struct spdm_state;
+
+typedef int (spdm_transport)(void *priv, struct device *dev,
+ const void *request, size_t request_sz,
+ void *response, size_t response_sz);
+
+struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
+ void *transport_priv, u32 transport_sz,
+ struct key *keyring);
+
+int spdm_authenticate(struct spdm_state *spdm_state);
+
+bool spdm_authenticated(struct spdm_state *spdm_state);
+
+void spdm_destroy(struct spdm_state *spdm_state);
+
+#endif
diff --git a/lib/Kconfig b/lib/Kconfig
index c686f4adc124..3516cf1dad16 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -764,3 +764,18 @@ config ASN1_ENCODER

config POLYNOMIAL
tristate
+
+config SPDM_REQUESTER
+ tristate
+ select KEYS
+ select ASYMMETRIC_KEY_TYPE
+ select ASYMMETRIC_PUBLIC_KEY_SUBTYPE
+ select X509_CERTIFICATE_PARSER
+ help
+ The Security Protocol and Data Model (SPDM) allows for authentication,
+ measurement, key exchange and encrypted sessions with devices. This
+ option enables support for the SPDM requester role.
+
+ Crypto algorithms offered to SPDM responders are limited to those
+ enabled in .config. Drivers selecting SPDM_REQUESTER need to also
+ select any algorithms they deem mandatory.
diff --git a/lib/Makefile b/lib/Makefile
index 740109b6e2c8..d9ae58a9ca83 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -315,6 +315,8 @@ obj-$(CONFIG_PERCPU_TEST) += percpu_test.o
obj-$(CONFIG_ASN1) += asn1_decoder.o
obj-$(CONFIG_ASN1_ENCODER) += asn1_encoder.o

+obj-$(CONFIG_SPDM_REQUESTER) += spdm_requester.o
+
obj-$(CONFIG_FONT_SUPPORT) += fonts/

hostprogs := gen_crc32table
diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
new file mode 100644
index 000000000000..407041036599
--- /dev/null
+++ b/lib/spdm_requester.c
@@ -0,0 +1,1487 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMTF Security Protocol and Data Model (SPDM)
+ * https://www.dmtf.org/dsp/DSP0274
+ *
+ * Copyright (C) 2021-22 Huawei
+ * Jonathan Cameron <[email protected]>
+ *
+ * Copyright (C) 2022-23 Intel Corporation
+ */
+
+#define dev_fmt(fmt) "SPDM: " fmt
+
+#include <linux/dev_printk.h>
+#include <linux/key.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/spdm.h>
+
+#include <asm/unaligned.h>
+#include <crypto/hash.h>
+#include <crypto/public_key.h>
+#include <keys/asymmetric-type.h>
+#include <keys/x509-parser.h>
+
+/* SPDM versions supported by this implementation */
+#define SPDM_MIN_VER 0x10
+#define SPDM_MAX_VER 0x13
+
+#define SPDM_CACHE_CAP BIT(0) /* response only */
+#define SPDM_CERT_CAP BIT(1)
+#define SPDM_CHAL_CAP BIT(2)
+#define SPDM_MEAS_CAP_MASK GENMASK(4, 3) /* response only */
+#define SPDM_MEAS_CAP_NO 0 /* response only */
+#define SPDM_MEAS_CAP_MEAS 1 /* response only */
+#define SPDM_MEAS_CAP_MEAS_SIG 2 /* response only */
+#define SPDM_MEAS_FRESH_CAP BIT(5) /* response only */
+#define SPDM_ENCRYPT_CAP BIT(6)
+#define SPDM_MAC_CAP BIT(7)
+#define SPDM_MUT_AUTH_CAP BIT(8)
+#define SPDM_KEY_EX_CAP BIT(9)
+#define SPDM_PSK_CAP_MASK GENMASK(11, 10)
+#define SPDM_PSK_CAP_NO 0
+#define SPDM_PSK_CAP_PSK 1
+#define SPDM_PSK_CAP_PSK_CTX 2 /* response only */
+#define SPDM_ENCAP_CAP BIT(12)
+#define SPDM_HBEAT_CAP BIT(13)
+#define SPDM_KEY_UPD_CAP BIT(14)
+#define SPDM_HANDSHAKE_ITC_CAP BIT(15)
+#define SPDM_PUB_KEY_ID_CAP BIT(16)
+#define SPDM_CHUNK_CAP BIT(17) /* 1.2 */
+#define SPDM_ALIAS_CERT_CAP BIT(18) /* 1.2 response only */
+#define SPDM_SET_CERT_CAP BIT(19) /* 1.2 response only */
+#define SPDM_CSR_CAP BIT(20) /* 1.2 response only */
+#define SPDM_CERT_INST_RESET_CAP BIT(21) /* 1.2 response only */
+#define SPDM_EP_INFO_CAP_MASK GENMASK(23, 22) /* 1.3 */
+#define SPDM_EP_INFO_CAP_NO 0 /* 1.3 */
+#define SPDM_EP_INFO_CAP_RSP 1 /* 1.3 */
+#define SPDM_EP_INFO_CAP_RSP_SIG 2 /* 1.3 */
+#define SPDM_MEL_CAP BIT(24) /* 1.3 response only */
+#define SPDM_EVENT_CAP BIT(25) /* 1.3 */
+#define SPDM_MULTI_KEY_CAP_MASK GENMASK(27, 26) /* 1.3 */
+#define SPDM_MULTI_KEY_CAP_NO 0 /* 1.3 */
+#define SPDM_MULTI_KEY_CAP_ONLY 1 /* 1.3 */
+#define SPDM_MULTI_KEY_CAP_SEL 2 /* 1.3 */
+#define SPDM_GET_KEY_PAIR_INFO_CAP BIT(28) /* 1.3 response only */
+#define SPDM_SET_KEY_PAIR_INFO_CAP BIT(29) /* 1.3 response only */
+
+/* SPDM capabilities supported by this implementation */
+#define SPDM_CAPS (SPDM_CERT_CAP | SPDM_CHAL_CAP)
+
+/* SPDM capabilities required from responders */
+#define SPDM_MIN_CAPS (SPDM_CERT_CAP | SPDM_CHAL_CAP)
+
+/*
+ * SPDM cryptographic timeout of this implementation:
+ * Assume calculations may take up to 1 sec on a busy machine, which equals
+ * roughly 1 << 20. That's within the limits mandated for responders by CMA
+ * (1 << 23 usec, PCIe r6.1 sec 6.31.3) and DOE (1 sec, PCIe r6.1 sec 6.30.2).
+ * Used in GET_CAPABILITIES exchange.
+ */
+#define SPDM_CTEXPONENT 20
+
+#define SPDM_ASYM_RSASSA_2048 BIT(0)
+#define SPDM_ASYM_RSAPSS_2048 BIT(1)
+#define SPDM_ASYM_RSASSA_3072 BIT(2)
+#define SPDM_ASYM_RSAPSS_3072 BIT(3)
+#define SPDM_ASYM_ECDSA_ECC_NIST_P256 BIT(4)
+#define SPDM_ASYM_RSASSA_4096 BIT(5)
+#define SPDM_ASYM_RSAPSS_4096 BIT(6)
+#define SPDM_ASYM_ECDSA_ECC_NIST_P384 BIT(7)
+#define SPDM_ASYM_ECDSA_ECC_NIST_P521 BIT(8)
+#define SPDM_ASYM_SM2_ECC_SM2_P256 BIT(9)
+#define SPDM_ASYM_EDDSA_ED25519 BIT(10)
+#define SPDM_ASYM_EDDSA_ED448 BIT(11)
+
+#define SPDM_HASH_SHA_256 BIT(0)
+#define SPDM_HASH_SHA_384 BIT(1)
+#define SPDM_HASH_SHA_512 BIT(2)
+#define SPDM_HASH_SHA3_256 BIT(3)
+#define SPDM_HASH_SHA3_384 BIT(4)
+#define SPDM_HASH_SHA3_512 BIT(5)
+#define SPDM_HASH_SM3_256 BIT(6)
+
+#if IS_ENABLED(CONFIG_CRYPTO_RSA)
+#define SPDM_ASYM_RSA SPDM_ASYM_RSASSA_2048 | \
+ SPDM_ASYM_RSASSA_3072 | \
+ SPDM_ASYM_RSASSA_4096 |
+#else
+#define SPDM_ASYM_RSA
+#endif
+
+#if IS_ENABLED(CONFIG_CRYPTO_ECDSA)
+#define SPDM_ASYM_ECDSA SPDM_ASYM_ECDSA_ECC_NIST_P256 | \
+ SPDM_ASYM_ECDSA_ECC_NIST_P384 |
+#else
+#define SPDM_ASYM_ECDSA
+#endif
+
+#if IS_ENABLED(CONFIG_CRYPTO_SHA256)
+#define SPDM_HASH_SHA2_256 SPDM_HASH_SHA_256 |
+#else
+#define SPDM_HASH_SHA2_256
+#endif
+
+#if IS_ENABLED(CONFIG_CRYPTO_SHA512)
+#define SPDM_HASH_SHA2_384_512 SPDM_HASH_SHA_384 | \
+ SPDM_HASH_SHA_512 |
+#else
+#define SPDM_HASH_SHA2_384_512
+#endif
+
+/* SPDM algorithms supported by this implementation */
+#define SPDM_ASYM_ALGOS (SPDM_ASYM_RSA \
+ SPDM_ASYM_ECDSA 0)
+
+#define SPDM_HASH_ALGOS (SPDM_HASH_SHA2_256 \
+ SPDM_HASH_SHA2_384_512 0)
+
+/*
+ * Common header shared by all messages.
+ * Note that the meaning of param1 and param2 is message dependent.
+ */
+struct spdm_header {
+ u8 version;
+ u8 code; /* RequestResponseCode */
+ u8 param1;
+ u8 param2;
+} __packed;
+
+#define SPDM_REQ 0x80
+#define SPDM_GET_VERSION 0x84
+
+struct spdm_get_version_req {
+ u8 version;
+ u8 code;
+ u8 param1;
+ u8 param2;
+} __packed;
+
+struct spdm_get_version_rsp {
+ u8 version;
+ u8 code;
+ u8 param1;
+ u8 param2;
+
+ u8 reserved;
+ u8 version_number_entry_count;
+ __le16 version_number_entries[];
+} __packed;
+
+#define SPDM_GET_CAPABILITIES 0xE1
+#define SPDM_MIN_DATA_TRANSFER_SIZE 42 /* SPDM 1.2.0 margin no 226 */
+
+/* For this exchange the request and response messages have the same form */
+struct spdm_get_capabilities_reqrsp {
+ u8 version;
+ u8 code;
+ u8 param1;
+ u8 param2;
+ /* End of SPDM 1.0 structure */
+
+ u8 reserved1;
+ u8 ctexponent;
+ u16 reserved2;
+
+ __le32 flags;
+ /* End of SPDM 1.1 structure */
+
+ __le32 data_transfer_size; /* 1.2+ */
+ __le32 max_spdm_msg_size; /* 1.2+ */
+} __packed;
+
+#define SPDM_NEGOTIATE_ALGS 0xE3
+
+struct spdm_negotiate_algs_req {
+ u8 version;
+ u8 code;
+ u8 param1; /* Number of ReqAlgStruct entries at end */
+ u8 param2;
+
+ __le16 length;
+ u8 measurement_specification;
+ u8 other_params_support; /* 1.2+ */
+
+ __le32 base_asym_algo;
+ __le32 base_hash_algo;
+
+ u8 reserved1[12];
+ u8 ext_asym_count;
+ u8 ext_hash_count;
+ u8 reserved2;
+ u8 mel_specification; /* 1.3+ */
+
+ /*
+ * Additional optional fields at end of this structure:
+ * - ExtAsym: 4 bytes * ext_asym_count
+ * - ExtHash: 4 bytes * ext_hash_count
+ * - ReqAlgStruct: variable size * param1 * 1.1+ *
+ */
+} __packed;
+
+struct spdm_negotiate_algs_rsp {
+ u8 version;
+ u8 code;
+ u8 param1; /* Number of RespAlgStruct entries at end */
+ u8 param2;
+
+ __le16 length;
+ u8 measurement_specification_sel;
+ u8 other_params_sel; /* 1.2+ */
+
+ __le32 measurement_hash_algo;
+ __le32 base_asym_sel;
+ __le32 base_hash_sel;
+
+ u8 reserved1[11];
+ u8 mel_specification_sel; /* 1.3+ */
+ u8 ext_asym_sel_count; /* Either 0 or 1 */
+ u8 ext_hash_sel_count; /* Either 0 or 1 */
+ u8 reserved2[2];
+
+ /*
+ * Additional optional fields at end of this structure:
+ * - ExtAsym: 4 bytes * ext_asym_count
+ * - ExtHash: 4 bytes * ext_hash_count
+ * - RespAlgStruct: variable size * param1 * 1.1+ *
+ */
+} __packed;
+
+struct spdm_req_alg_struct {
+ u8 alg_type;
+ u8 alg_count; /* 0x2K where K is number of alg_external entries */
+ __le16 alg_supported; /* Size is in alg_count[7:4], always 2 */
+ __le32 alg_external[];
+} __packed;
+
+#define SPDM_GET_DIGESTS 0x81
+
+struct spdm_get_digests_req {
+ u8 version;
+ u8 code;
+ u8 param1; /* Reserved */
+ u8 param2; /* Reserved */
+} __packed;
+
+struct spdm_get_digests_rsp {
+ u8 version;
+ u8 code;
+ u8 param1; /* SupportedSlotMask */ /* 1.3+ */
+ u8 param2; /* ProvisionedSlotMask */
+ u8 digests[]; /* Hash of struct spdm_cert_chain for each slot */
+ /* End of SPDM 1.2 structure */
+
+ /*
+ * Additional optional fields at end of this structure:
+ * (omitted as long as we do not advertise MULTI_KEY_CAP)
+ * - KeyPairID: 1 byte for each slot * 1.3+ *
+ * - CertificateInfo: 1 byte for each slot * 1.3+ *
+ * - KeyUsageMask: 2 bytes for each slot * 1.3+ *
+ */
+} __packed;
+
+#define SPDM_GET_CERTIFICATE 0x82
+#define SPDM_SLOTS 8 /* SPDM 1.0.0 section 4.9.2.1 */
+
+struct spdm_get_certificate_req {
+ u8 version;
+ u8 code;
+ u8 param1; /* Slot number 0..7 */
+ u8 param2; /* SlotSizeRequested */ /* 1.3+ */
+ __le16 offset;
+ __le16 length;
+} __packed;
+
+struct spdm_get_certificate_rsp {
+ u8 version;
+ u8 code;
+ u8 param1; /* Slot number 0..7 */
+ u8 param2; /* CertModel */ /* 1.3+ */
+ __le16 portion_length;
+ __le16 remainder_length;
+ u8 cert_chain[]; /* PortionLength long */
+} __packed;
+
+struct spdm_cert_chain {
+ __le16 length;
+ u8 reserved[2];
+ /*
+ * Additional fields at end of this structure:
+ * - RootHash: Digest of Root Certificate
+ * - Certificates: Chain of ASN.1 DER-encoded X.509 v3 certificates
+ */
+} __packed;
+
+#define SPDM_CHALLENGE 0x83
+#define SPDM_MAX_OPAQUE_DATA 1024 /* SPDM 1.0.0 table 21 */
+
+struct spdm_challenge_req {
+ u8 version;
+ u8 code;
+ u8 param1; /* Slot number 0..7 */
+ u8 param2; /* MeasurementSummaryHash type */
+ u8 nonce[32];
+ /* End of SPDM 1.2 structure */
+
+ u8 context[8]; /* 1.3+ */
+} __packed;
+
+struct spdm_challenge_rsp {
+ u8 version;
+ u8 code;
+ u8 param1; /* Slot number 0..7 */
+ u8 param2; /* Slot mask */
+ /*
+ * Additional fields at end of this structure:
+ * - CertChainHash: Hash of struct spdm_cert_chain for selected slot
+ * - Nonce: 32 bytes long
+ * - MeasurementSummaryHash: Optional hash of selected measurements
+ * - OpaqueDataLength: 2 bytes long
+ * - OpaqueData: Up to 1024 bytes long
+ * - RequesterContext: 8 bytes long * 1.3+ *
+ * - Signature
+ */
+} __packed;
+
+#define SPDM_ERROR 0x7f
+
+enum spdm_error_code {
+ spdm_invalid_request = 0x01,
+ spdm_invalid_session = 0x02, /* 1.1 only */
+ spdm_busy = 0x03,
+ spdm_unexpected_request = 0x04,
+ spdm_unspecified = 0x05,
+ spdm_decrypt_error = 0x06,
+ spdm_unsupported_request = 0x07,
+ spdm_request_in_flight = 0x08,
+ spdm_invalid_response_code = 0x09,
+ spdm_session_limit_exceeded = 0x0a,
+ spdm_session_required = 0x0b,
+ spdm_reset_required = 0x0c,
+ spdm_response_too_large = 0x0d,
+ spdm_request_too_large = 0x0e,
+ spdm_large_response = 0x0f,
+ spdm_message_lost = 0x10,
+ spdm_invalid_policy = 0x11, /* 1.3+ */
+ spdm_version_mismatch = 0x41,
+ spdm_response_not_ready = 0x42,
+ spdm_request_resynch = 0x43,
+ spdm_operation_failed = 0x44, /* 1.3+ */
+ spdm_no_pending_requests = 0x45, /* 1.3+ */
+ spdm_vendor_defined_error = 0xff,
+};
+
+struct spdm_error_rsp {
+ u8 version;
+ u8 code;
+ enum spdm_error_code error_code:8;
+ u8 error_data;
+
+ u8 extended_error_data[];
+} __packed;
+
+static int spdm_err(struct device *dev, struct spdm_error_rsp *rsp)
+{
+ switch (rsp->error_code) {
+ case spdm_invalid_request:
+ dev_err(dev, "Invalid request\n");
+ return -EINVAL;
+ case spdm_invalid_session:
+ if (rsp->version == 0x11) {
+ dev_err(dev, "Invalid session %#x\n", rsp->error_data);
+ return -EINVAL;
+ }
+ break;
+ case spdm_busy:
+ dev_err(dev, "Busy\n");
+ return -EBUSY;
+ case spdm_unexpected_request:
+ dev_err(dev, "Unexpected request\n");
+ return -EINVAL;
+ case spdm_unspecified:
+ dev_err(dev, "Unspecified error\n");
+ return -EINVAL;
+ case spdm_decrypt_error:
+ dev_err(dev, "Decrypt error\n");
+ return -EIO;
+ case spdm_unsupported_request:
+ dev_err(dev, "Unsupported request %#x\n", rsp->error_data);
+ return -EINVAL;
+ case spdm_request_in_flight:
+ dev_err(dev, "Request in flight\n");
+ return -EINVAL;
+ case spdm_invalid_response_code:
+ dev_err(dev, "Invalid response code\n");
+ return -EINVAL;
+ case spdm_session_limit_exceeded:
+ dev_err(dev, "Session limit exceeded\n");
+ return -EBUSY;
+ case spdm_session_required:
+ dev_err(dev, "Session required\n");
+ return -EINVAL;
+ case spdm_reset_required:
+ dev_err(dev, "Reset required\n");
+ return -ERESTART;
+ case spdm_response_too_large:
+ dev_err(dev, "Response too large\n");
+ return -EINVAL;
+ case spdm_request_too_large:
+ dev_err(dev, "Request too large\n");
+ return -EINVAL;
+ case spdm_large_response:
+ dev_err(dev, "Large response\n");
+ return -EMSGSIZE;
+ case spdm_message_lost:
+ dev_err(dev, "Message lost\n");
+ return -EIO;
+ case spdm_invalid_policy:
+ dev_err(dev, "Invalid policy\n");
+ return -EINVAL;
+ case spdm_version_mismatch:
+ dev_err(dev, "Version mismatch\n");
+ return -EINVAL;
+ case spdm_response_not_ready:
+ dev_err(dev, "Response not ready\n");
+ return -EINPROGRESS;
+ case spdm_request_resynch:
+ dev_err(dev, "Request resynchronization\n");
+ return -ERESTART;
+ case spdm_operation_failed:
+ dev_err(dev, "Operation failed\n");
+ return -EINVAL;
+ case spdm_no_pending_requests:
+ return -ENOENT;
+ case spdm_vendor_defined_error:
+ dev_err(dev, "Vendor defined error\n");
+ return -EINVAL;
+ }
+
+ dev_err(dev, "Undefined error %#x\n", rsp->error_code);
+ return -EINVAL;
+}
+
+/**
+ * struct spdm_state - SPDM session state
+ *
+ * @lock: Serializes multiple concurrent spdm_authenticate() calls.
+ * @authenticated: Whether device was authenticated successfully.
+ * @dev: Transport device. Used for error reporting and passed to @transport.
+ * @transport: Transport function to perform one message exchange.
+ * @transport_priv: Transport private data.
+ * @transport_sz: Maximum message size the transport is capable of (in bytes).
+ * Used as DataTransferSize in GET_CAPABILITIES exchange.
+ * @version: Maximum common supported version of requester and responder.
+ * Negotiated during GET_VERSION exchange.
+ * @responder_caps: Cached capabilities of responder.
+ * Received during GET_CAPABILITIES exchange.
+ * @base_asym_alg: Asymmetric key algorithm for signature verification of
+ * CHALLENGE_AUTH messages.
+ * Selected by responder during NEGOTIATE_ALGORITHMS exchange.
+ * @base_hash_alg: Hash algorithm for signature verification of
+ * CHALLENGE_AUTH messages.
+ * Selected by responder during NEGOTIATE_ALGORITHMS exchange.
+ * @slot_mask: Bitmask of populated certificate slots in the responder.
+ * Received during GET_DIGESTS exchange.
+ * @base_asym_enc: Human-readable name of @base_asym_alg's signature encoding.
+ * Passed to crypto subsystem when calling verify_signature().
+ * @s: Signature length of @base_asym_alg (in bytes). S or SigLen in SPDM
+ * specification.
+ * @base_hash_alg_name: Human-readable name of @base_hash_alg.
+ * Passed to crypto subsystem when calling crypto_alloc_shash() and
+ * verify_signature().
+ * @shash: Synchronous hash handle for @base_hash_alg computation.
+ * @desc: Synchronous hash context for @base_hash_alg computation.
+ * @h: Hash length of @base_hash_alg (in bytes). H in SPDM specification.
+ * @leaf_key: Public key portion of leaf certificate against which to check
+ * responder's signatures.
+ * @root_keyring: Keyring against which to check the first certificate in
+ * responder's certificate chain.
+ */
+struct spdm_state {
+ struct mutex lock;
+ unsigned int authenticated:1;
+
+ /* Transport */
+ struct device *dev;
+ spdm_transport *transport;
+ void *transport_priv;
+ u32 transport_sz;
+
+ /* Negotiated state */
+ u8 version;
+ u32 responder_caps;
+ u32 base_asym_alg;
+ u32 base_hash_alg;
+ unsigned long slot_mask;
+
+ /* Signature algorithm */
+ const char *base_asym_enc;
+ size_t s;
+
+ /* Hash algorithm */
+ const char *base_hash_alg_name;
+ struct crypto_shash *shash;
+ struct shash_desc *desc;
+ size_t h;
+
+ /* Certificates */
+ struct public_key *leaf_key;
+ struct key *root_keyring;
+};
+
+static int __spdm_exchange(struct spdm_state *spdm_state,
+ const void *req, size_t req_sz,
+ void *rsp, size_t rsp_sz)
+{
+ const struct spdm_header *request = req;
+ struct spdm_header *response = rsp;
+ int length;
+ int rc;
+
+ rc = spdm_state->transport(spdm_state->transport_priv, spdm_state->dev,
+ req, req_sz, rsp, rsp_sz);
+ if (rc < 0)
+ return rc;
+
+ length = rc;
+ if (length < sizeof(struct spdm_header))
+ return -EPROTO;
+
+ if (response->code == SPDM_ERROR)
+ return spdm_err(spdm_state->dev, (struct spdm_error_rsp *)rsp);
+
+ if (response->code != (request->code & ~SPDM_REQ)) {
+ dev_err(spdm_state->dev,
+ "Response code %#x does not match request code %#x\n",
+ response->code, request->code);
+ return -EPROTO;
+ }
+
+ return length;
+}
+
+static int spdm_exchange(struct spdm_state *spdm_state,
+ void *req, size_t req_sz, void *rsp, size_t rsp_sz)
+{
+ struct spdm_header *req_header = req;
+
+ if (req_sz < sizeof(struct spdm_header) ||
+ rsp_sz < sizeof(struct spdm_header))
+ return -EINVAL;
+
+ req_header->version = spdm_state->version;
+
+ return __spdm_exchange(spdm_state, req, req_sz, rsp, rsp_sz);
+}
+
+static const struct spdm_get_version_req spdm_get_version_req = {
+ .version = 0x10,
+ .code = SPDM_GET_VERSION,
+};
+
+static int spdm_get_version(struct spdm_state *spdm_state,
+ struct spdm_get_version_rsp *rsp, size_t *rsp_sz)
+{
+ u8 version = SPDM_MIN_VER;
+ bool foundver = false;
+ int rc, length, i;
+
+ /*
+ * Bypass spdm_exchange() to be able to set version = 0x10.
+ * rsp buffer is large enough for the maximum possible 255 entries.
+ */
+ rc = __spdm_exchange(spdm_state, &spdm_get_version_req,
+ sizeof(spdm_get_version_req), rsp,
+ struct_size(rsp, version_number_entries, 255));
+ if (rc < 0)
+ return rc;
+
+ length = rc;
+ if (length < sizeof(*rsp) ||
+ length < struct_size(rsp, version_number_entries,
+ rsp->version_number_entry_count)) {
+ dev_err(spdm_state->dev, "Truncated version response\n");
+ return -EIO;
+ }
+
+ for (i = 0; i < rsp->version_number_entry_count; i++) {
+ u8 ver = get_unaligned_le16(&rsp->version_number_entries[i]) >> 8;
+
+ if (ver >= version && ver <= SPDM_MAX_VER) {
+ foundver = true;
+ version = ver;
+ }
+ }
+ if (!foundver) {
+ dev_err(spdm_state->dev, "No common supported version\n");
+ return -EPROTO;
+ }
+ spdm_state->version = version;
+
+ *rsp_sz = struct_size(rsp, version_number_entries,
+ rsp->version_number_entry_count);
+
+ return 0;
+}
+
+static int spdm_get_capabilities(struct spdm_state *spdm_state,
+ struct spdm_get_capabilities_reqrsp *req,
+ size_t *reqrsp_sz)
+{
+ struct spdm_get_capabilities_reqrsp *rsp;
+ size_t req_sz;
+ size_t rsp_sz;
+ int rc, length;
+
+ req->code = SPDM_GET_CAPABILITIES;
+ req->ctexponent = SPDM_CTEXPONENT;
+ req->flags = cpu_to_le32(SPDM_CAPS);
+
+ if (spdm_state->version == 0x10) {
+ req_sz = offsetof(typeof(*req), reserved1);
+ rsp_sz = offsetof(typeof(*rsp), data_transfer_size);
+ } else if (spdm_state->version == 0x11) {
+ req_sz = offsetof(typeof(*req), data_transfer_size);
+ rsp_sz = offsetof(typeof(*rsp), data_transfer_size);
+ } else {
+ req_sz = sizeof(*req);
+ rsp_sz = sizeof(*rsp);
+ req->data_transfer_size = cpu_to_le32(spdm_state->transport_sz);
+ req->max_spdm_msg_size = cpu_to_le32(spdm_state->transport_sz);
+ }
+
+ rsp = (void *)req + req_sz;
+
+ rc = spdm_exchange(spdm_state, req, req_sz, rsp, rsp_sz);
+ if (rc < 0)
+ return rc;
+
+ length = rc;
+ if (length < rsp_sz) {
+ dev_err(spdm_state->dev, "Truncated capabilities response\n");
+ return -EIO;
+ }
+
+ spdm_state->responder_caps = le32_to_cpu(rsp->flags);
+ if ((spdm_state->responder_caps & SPDM_MIN_CAPS) != SPDM_MIN_CAPS)
+ return -EPROTONOSUPPORT;
+
+ if (spdm_state->version >= 0x12) {
+ u32 data_transfer_size = le32_to_cpu(rsp->data_transfer_size);
+ if (data_transfer_size < SPDM_MIN_DATA_TRANSFER_SIZE) {
+ dev_err(spdm_state->dev,
+ "Malformed capabilities response\n");
+ return -EPROTO;
+ }
+ spdm_state->transport_sz = min(spdm_state->transport_sz,
+ data_transfer_size);
+ }
+
+ *reqrsp_sz += req_sz + rsp_sz;
+
+ return 0;
+}
+
+/**
+ * spdm_start_hash() - Build first part of CHALLENGE_AUTH hash
+ *
+ * @spdm_state: SPDM session state
+ * @transcript: GET_VERSION request and GET_CAPABILITIES request and response
+ * @transcript_sz: length of @transcript
+ * @req: NEGOTIATE_ALGORITHMS request
+ * @req_sz: length of @req
+ * @rsp: ALGORITHMS response
+ * @rsp_sz: length of @rsp
+ *
+ * We've just learned the hash algorithm to use for CHALLENGE_AUTH signature
+ * verification. Hash the GET_VERSION and GET_CAPABILITIES exchanges which
+ * have been stashed in @transcript, as well as the NEGOTIATE_ALGORITHMS
+ * exchange which has just been performed. Subsequent requests and responses
+ * will be added to the hash as they become available.
+ *
+ * Return 0 on success or a negative errno.
+ */
+static int spdm_start_hash(struct spdm_state *spdm_state,
+ void *transcript, size_t transcript_sz,
+ void *req, size_t req_sz, void *rsp, size_t rsp_sz)
+{
+ int rc;
+
+ spdm_state->shash = crypto_alloc_shash(spdm_state->base_hash_alg_name,
+ 0, 0);
+ if (IS_ERR(spdm_state->shash))
+ return PTR_ERR(spdm_state->shash);
+
+ spdm_state->desc = kzalloc(sizeof(*spdm_state->desc) +
+ crypto_shash_descsize(spdm_state->shash),
+ GFP_KERNEL);
+ if (!spdm_state->desc)
+ return -ENOMEM;
+
+ spdm_state->desc->tfm = spdm_state->shash;
+
+ /* Used frequently to compute offsets, so cache H */
+ spdm_state->h = crypto_shash_digestsize(spdm_state->shash);
+
+ rc = crypto_shash_init(spdm_state->desc);
+ if (rc)
+ return rc;
+
+ rc = crypto_shash_update(spdm_state->desc,
+ (u8 *)&spdm_get_version_req,
+ sizeof(spdm_get_version_req));
+ if (rc)
+ return rc;
+
+ rc = crypto_shash_update(spdm_state->desc,
+ (u8 *)transcript, transcript_sz);
+ if (rc)
+ return rc;
+
+ rc = crypto_shash_update(spdm_state->desc, (u8 *)req, req_sz);
+ if (rc)
+ return rc;
+
+ rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, rsp_sz);
+
+ return rc;
+}
+
+static int spdm_parse_algs(struct spdm_state *spdm_state)
+{
+ switch (spdm_state->base_asym_alg) {
+ case SPDM_ASYM_RSASSA_2048:
+ spdm_state->s = 256;
+ spdm_state->base_asym_enc = "pkcs1";
+ break;
+ case SPDM_ASYM_RSASSA_3072:
+ spdm_state->s = 384;
+ spdm_state->base_asym_enc = "pkcs1";
+ break;
+ case SPDM_ASYM_RSASSA_4096:
+ spdm_state->s = 512;
+ spdm_state->base_asym_enc = "pkcs1";
+ break;
+ case SPDM_ASYM_ECDSA_ECC_NIST_P256:
+ spdm_state->s = 64;
+ spdm_state->base_asym_enc = "p1363";
+ break;
+ case SPDM_ASYM_ECDSA_ECC_NIST_P384:
+ spdm_state->s = 96;
+ spdm_state->base_asym_enc = "p1363";
+ break;
+ default:
+ dev_err(spdm_state->dev, "Unknown asym algorithm\n");
+ return -EINVAL;
+ }
+
+ switch (spdm_state->base_hash_alg) {
+ case SPDM_HASH_SHA_256:
+ spdm_state->base_hash_alg_name = "sha256";
+ break;
+ case SPDM_HASH_SHA_384:
+ spdm_state->base_hash_alg_name = "sha384";
+ break;
+ case SPDM_HASH_SHA_512:
+ spdm_state->base_hash_alg_name = "sha512";
+ break;
+ default:
+ dev_err(spdm_state->dev, "Unknown hash algorithm\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int spdm_negotiate_algs(struct spdm_state *spdm_state,
+ void *transcript, size_t transcript_sz)
+{
+ struct spdm_req_alg_struct *req_alg_struct;
+ struct spdm_negotiate_algs_req *req;
+ struct spdm_negotiate_algs_rsp *rsp;
+ size_t req_sz = sizeof(*req);
+ size_t rsp_sz = sizeof(*rsp);
+ int rc, length;
+
+ /* Request length shall be <= 128 bytes (SPDM 1.1.0 margin no 185) */
+ BUILD_BUG_ON(req_sz > 128);
+
+ req = kzalloc(req_sz, GFP_KERNEL);
+ if (!req)
+ return -ENOMEM;
+
+ req->code = SPDM_NEGOTIATE_ALGS;
+ req->length = cpu_to_le16(req_sz);
+ req->base_asym_algo = cpu_to_le32(SPDM_ASYM_ALGOS);
+ req->base_hash_algo = cpu_to_le32(SPDM_HASH_ALGOS);
+
+ rsp = kzalloc(rsp_sz, GFP_KERNEL);
+ if (!rsp) {
+ rc = -ENOMEM;
+ goto err_free_req;
+ }
+
+ rc = spdm_exchange(spdm_state, req, req_sz, rsp, rsp_sz);
+ if (rc < 0)
+ goto err_free_rsp;
+
+ length = rc;
+ if (length < sizeof(*rsp) ||
+ length < sizeof(*rsp) + rsp->param1 * sizeof(*req_alg_struct)) {
+ dev_err(spdm_state->dev, "Truncated algorithms response\n");
+ rc = -EIO;
+ goto err_free_rsp;
+ }
+
+ spdm_state->base_asym_alg =
+ le32_to_cpu(rsp->base_asym_sel) & SPDM_ASYM_ALGOS;
+ spdm_state->base_hash_alg =
+ le32_to_cpu(rsp->base_hash_sel) & SPDM_HASH_ALGOS;
+
+ /* Responder shall select exactly 1 alg (SPDM 1.0.0 table 14) */
+ if (hweight32(spdm_state->base_asym_alg) != 1 ||
+ hweight32(spdm_state->base_hash_alg) != 1 ||
+ rsp->ext_asym_sel_count != 0 ||
+ rsp->ext_hash_sel_count != 0 ||
+ rsp->param1 > req->param1) {
+ dev_err(spdm_state->dev, "Malformed algorithms response\n");
+ rc = -EPROTO;
+ goto err_free_rsp;
+ }
+
+ rc = spdm_parse_algs(spdm_state);
+ if (rc)
+ goto err_free_rsp;
+
+ /*
+ * If request contained a ReqAlgStruct not supported by responder,
+ * the corresponding RespAlgStruct may be omitted in response.
+ * Calculate the actual (possibly shorter) response length:
+ */
+ rsp_sz = sizeof(*rsp) + rsp->param1 * sizeof(*req_alg_struct);
+
+ rc = spdm_start_hash(spdm_state, transcript, transcript_sz,
+ req, req_sz, rsp, rsp_sz);
+
+err_free_rsp:
+ kfree(rsp);
+err_free_req:
+ kfree(req);
+
+ return rc;
+}
+
+static int spdm_get_digests(struct spdm_state *spdm_state)
+{
+ struct spdm_get_digests_req req = { .code = SPDM_GET_DIGESTS };
+ struct spdm_get_digests_rsp *rsp;
+ size_t rsp_sz;
+ int rc, length;
+
+ /*
+ * Assume all 8 slots are populated. We know the hash length (and thus
+ * the response size) because the responder only returns digests for
+ * the hash algorithm selected during the NEGOTIATE_ALGORITHMS exchange
+ * (SPDM 1.1.2 margin no 206).
+ */
+ rsp_sz = sizeof(*rsp) + SPDM_SLOTS * spdm_state->h;
+ rsp = kzalloc(rsp_sz, GFP_KERNEL);
+ if (!rsp)
+ return -ENOMEM;
+
+ rc = spdm_exchange(spdm_state, &req, sizeof(req), rsp, rsp_sz);
+ if (rc < 0)
+ goto err_free_rsp;
+
+ length = rc;
+ if (length < sizeof(*rsp) ||
+ length < sizeof(*rsp) + hweight8(rsp->param2) * spdm_state->h) {
+ dev_err(spdm_state->dev, "Truncated digests response\n");
+ rc = -EIO;
+ goto err_free_rsp;
+ }
+
+ rsp_sz = sizeof(*rsp) + hweight8(rsp->param2) * spdm_state->h;
+
+ /*
+ * Authentication-capable endpoints must carry at least 1 cert chain
+ * (SPDM 1.0.0 section 4.9.2.1).
+ */
+ spdm_state->slot_mask = rsp->param2;
+ if (!spdm_state->slot_mask) {
+ dev_err(spdm_state->dev, "No certificates provisioned\n");
+ rc = -EPROTO;
+ goto err_free_rsp;
+ }
+
+ rc = crypto_shash_update(spdm_state->desc, (u8 *)&req, sizeof(req));
+ if (rc)
+ goto err_free_rsp;
+
+ rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, rsp_sz);
+
+err_free_rsp:
+ kfree(rsp);
+
+ return rc;
+}
+
+static int spdm_validate_cert_chain(struct spdm_state *spdm_state, u8 slot,
+ u8 *certs, size_t total_length)
+{
+ struct x509_certificate *cert, *prev = NULL;
+ bool is_leaf_cert;
+ size_t offset = 0;
+ struct key *key;
+ int rc, length;
+
+ while (offset < total_length) {
+ rc = x509_get_certificate_length(certs + offset,
+ total_length - offset);
+ if (rc < 0) {
+ dev_err(spdm_state->dev, "Invalid certificate length "
+ "at slot %u offset %zu\n", slot, offset);
+ goto err_free_prev;
+ }
+
+ length = rc;
+ is_leaf_cert = offset + length == total_length;
+
+ cert = x509_cert_parse(certs + offset, length);
+ if (IS_ERR(cert)) {
+ rc = PTR_ERR(cert);
+ dev_err(spdm_state->dev, "Certificate parse error %d "
+ "at slot %u offset %zu\n", rc, slot, offset);
+ goto err_free_prev;
+ }
+ if ((is_leaf_cert ==
+ test_bit(KEY_EFLAG_CA, &cert->pub->key_eflags)) ||
+ (is_leaf_cert &&
+ !test_bit(KEY_EFLAG_DIGITALSIG, &cert->pub->key_eflags))) {
+ rc = -EKEYREJECTED;
+ dev_err(spdm_state->dev, "Malformed certificate "
+ "at slot %u offset %zu\n", slot, offset);
+ goto err_free_cert;
+ }
+ if (cert->unsupported_sig) {
+ rc = -EKEYREJECTED;
+ dev_err(spdm_state->dev, "Unsupported signature "
+ "at slot %u offset %zu\n", slot, offset);
+ goto err_free_cert;
+ }
+ if (cert->blacklisted) {
+ rc = -EKEYREJECTED;
+ goto err_free_cert;
+ }
+
+ if (!prev) {
+ /* First cert in chain, check against root_keyring */
+ key = find_asymmetric_key(spdm_state->root_keyring,
+ cert->sig->auth_ids[0],
+ cert->sig->auth_ids[1],
+ cert->sig->auth_ids[2],
+ false);
+ if (IS_ERR(key)) {
+ dev_info(spdm_state->dev, "Root certificate "
+ "for slot %u not found in %s "
+ "keyring: %s\n", slot,
+ spdm_state->root_keyring->description,
+ cert->issuer);
+ rc = PTR_ERR(key);
+ goto err_free_cert;
+ }
+
+ rc = verify_signature(key, cert->sig);
+ key_put(key);
+ } else {
+ /* Subsequent cert in chain, check against previous */
+ rc = public_key_verify_signature(prev->pub, cert->sig);
+ }
+
+ if (rc) {
+ dev_err(spdm_state->dev, "Signature validation error "
+ "%d at slot %u offset %zu\n", rc, slot, offset);
+ goto err_free_cert;
+ }
+
+ x509_free_certificate(prev);
+ offset += length;
+ prev = cert;
+ }
+
+ prev = NULL;
+ spdm_state->leaf_key = cert->pub;
+ cert->pub = NULL;
+
+err_free_cert:
+ x509_free_certificate(cert);
+err_free_prev:
+ x509_free_certificate(prev);
+ return rc;
+}
+
+static int spdm_get_certificate(struct spdm_state *spdm_state, u8 slot)
+{
+ struct spdm_get_certificate_req req = {
+ .code = SPDM_GET_CERTIFICATE,
+ .param1 = slot,
+ };
+ struct spdm_get_certificate_rsp *rsp;
+ struct spdm_cert_chain *certs = NULL;
+ size_t rsp_sz, total_length, header_length;
+ u16 remainder_length = 0xffff;
+ u16 portion_length;
+ u16 offset = 0;
+ int rc, length;
+
+ /*
+ * It is legal for the responder to send more bytes than requested.
+ * (Note the "should" in SPDM 1.0.0 table 19.) If we allocate a
+ * too small buffer, we can't calculate the hash over the (truncated)
+ * response. Only choice is thus to allocate the maximum possible 64k.
+ */
+ rsp_sz = min_t(u32, sizeof(*rsp) + 0xffff, spdm_state->transport_sz);
+ rsp = kvmalloc(rsp_sz, GFP_KERNEL);
+ if (!rsp)
+ return -ENOMEM;
+
+ do {
+ /*
+ * If transport_sz is sufficiently large, first request will be
+ * for offset 0 and length 0xffff, which means entire cert
+ * chain (SPDM 1.0.0 table 18).
+ */
+ req.offset = cpu_to_le16(offset);
+ req.length = cpu_to_le16(min_t(size_t, remainder_length,
+ rsp_sz - sizeof(*rsp)));
+
+ rc = spdm_exchange(spdm_state, &req, sizeof(req), rsp, rsp_sz);
+ if (rc < 0)
+ goto err_free_certs;
+
+ length = rc;
+ if (length < sizeof(*rsp) ||
+ length < sizeof(*rsp) + le16_to_cpu(rsp->portion_length)) {
+ dev_err(spdm_state->dev,
+ "Truncated certificate response\n");
+ rc = -EIO;
+ goto err_free_certs;
+ }
+
+ portion_length = le16_to_cpu(rsp->portion_length);
+ remainder_length = le16_to_cpu(rsp->remainder_length);
+
+ /*
+ * On first response we learn total length of cert chain.
+ * Should portion_length + remainder_length exceed 0xffff,
+ * the min() ensures that the malformed check triggers below.
+ */
+ if (!certs) {
+ total_length = min(portion_length + remainder_length,
+ 0xffff);
+ certs = kvmalloc(total_length, GFP_KERNEL);
+ if (!certs) {
+ rc = -ENOMEM;
+ goto err_free_certs;
+ }
+ }
+
+ if (!portion_length ||
+ (rsp->param1 & 0xf) != slot ||
+ offset + portion_length + remainder_length != total_length) {
+ dev_err(spdm_state->dev,
+ "Malformed certificate response\n");
+ rc = -EPROTO;
+ goto err_free_certs;
+ }
+
+ memcpy((u8 *)certs + offset, rsp->cert_chain, portion_length);
+ offset += portion_length;
+
+ rc = crypto_shash_update(spdm_state->desc, (u8 *)&req,
+ sizeof(req));
+ if (rc)
+ goto err_free_certs;
+
+ rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp,
+ sizeof(*rsp) + portion_length);
+ if (rc)
+ goto err_free_certs;
+
+ } while (remainder_length > 0);
+
+ header_length = sizeof(struct spdm_cert_chain) + spdm_state->h;
+
+ if (total_length < header_length ||
+ total_length != le16_to_cpu(certs->length)) {
+ dev_err(spdm_state->dev,
+ "Malformed certificate chain in slot %u\n", slot);
+ rc = -EPROTO;
+ goto err_free_certs;
+ }
+
+ rc = spdm_validate_cert_chain(spdm_state, slot,
+ (u8 *)certs + header_length,
+ total_length - header_length);
+
+err_free_certs:
+ kvfree(certs);
+ kvfree(rsp);
+ return rc;
+}
+
+#define SPDM_PREFIX_SZ 64 /* SPDM 1.2.0 margin no 803 */
+#define SPDM_COMBINED_PREFIX_SZ 100 /* SPDM 1.2.0 margin no 806 */
+
+/**
+ * spdm_create_combined_prefix() - Create combined_spdm_prefix for a hash
+ *
+ * @spdm_state: SPDM session state
+ * @spdm_context: SPDM context
+ * @buf: Buffer to receive combined_spdm_prefix (100 bytes)
+ *
+ * From SPDM 1.2, a hash is prefixed with the SPDM version and context before
+ * a signature is generated (or verified) over the resulting concatenation
+ * (SPDM 1.2.0 section 15). Create that prefix.
+ */
+static void spdm_create_combined_prefix(struct spdm_state *spdm_state,
+ const char *spdm_context, void *buf)
+{
+ u8 minor = spdm_state->version & 0xf;
+ u8 major = spdm_state->version >> 4;
+ size_t len = strlen(spdm_context);
+ int rc, zero_pad;
+
+ rc = snprintf(buf, SPDM_PREFIX_SZ + 1,
+ "dmtf-spdm-v%hhx.%hhx.*dmtf-spdm-v%hhx.%hhx.*"
+ "dmtf-spdm-v%hhx.%hhx.*dmtf-spdm-v%hhx.%hhx.*",
+ major, minor, major, minor, major, minor, major, minor);
+ WARN_ON(rc != SPDM_PREFIX_SZ);
+
+ zero_pad = SPDM_COMBINED_PREFIX_SZ - SPDM_PREFIX_SZ - 1 - len;
+ WARN_ON(zero_pad < 0);
+
+ memset(buf + SPDM_PREFIX_SZ + 1, 0, zero_pad);
+ memcpy(buf + SPDM_PREFIX_SZ + 1 + zero_pad, spdm_context, len);
+}
+
+/**
+ * spdm_verify_signature() - Verify signature against leaf key
+ *
+ * @spdm_state: SPDM session state
+ * @s: Signature
+ * @spdm_context: SPDM context (used to create combined_spdm_prefix)
+ *
+ * Implementation of the abstract SPDMSignatureVerify() function described in
+ * SPDM 1.2.0 section 16: Compute the hash in @spdm_state->desc and verify
+ * that its signature @s was generated with @spdm_state->leaf_key.
+ * Return 0 on success or a negative errno.
+ */
+static int spdm_verify_signature(struct spdm_state *spdm_state, u8 *s,
+ const char *spdm_context)
+{
+ struct public_key_signature sig = {
+ .s = s,
+ .s_size = spdm_state->s,
+ .encoding = spdm_state->base_asym_enc,
+ .hash_algo = spdm_state->base_hash_alg_name,
+ };
+ u8 *m, *mhash = NULL;
+ int rc;
+
+ m = kmalloc(SPDM_COMBINED_PREFIX_SZ + spdm_state->h, GFP_KERNEL);
+ if (!m)
+ return -ENOMEM;
+
+ rc = crypto_shash_final(spdm_state->desc, m + SPDM_COMBINED_PREFIX_SZ);
+ if (rc)
+ goto err_free_m;
+
+ if (spdm_state->version <= 0x11) {
+ /*
+ * Until SPDM 1.1, the signature is computed only over the hash
+ * (SPDM 1.0.0 section 4.9.2.7).
+ */
+ sig.digest = m + SPDM_COMBINED_PREFIX_SZ;
+ sig.digest_size = spdm_state->h;
+ } else {
+ /*
+ * From SPDM 1.2, the hash is prefixed with spdm_context before
+ * computing the signature over the resulting message M
+ * (SPDM 1.2.0 margin no 841).
+ */
+ spdm_create_combined_prefix(spdm_state, spdm_context, m);
+
+ /*
+ * RSA and ECDSA algorithms require that M is hashed once more.
+ * EdDSA and SM2 algorithms omit that step.
+ * The switch statement prepares for their introduction.
+ */
+ switch (spdm_state->base_asym_alg) {
+ default:
+ mhash = kmalloc(spdm_state->h, GFP_KERNEL);
+ if (!mhash) {
+ rc = -ENOMEM;
+ goto err_free_m;
+ }
+
+ rc = crypto_shash_digest(spdm_state->desc, m,
+ SPDM_COMBINED_PREFIX_SZ + spdm_state->h,
+ mhash);
+ if (rc)
+ goto err_free_mhash;
+
+ sig.digest = mhash;
+ sig.digest_size = spdm_state->h;
+ break;
+ }
+ }
+
+ rc = public_key_verify_signature(spdm_state->leaf_key, &sig);
+
+err_free_mhash:
+ kfree(mhash);
+err_free_m:
+ kfree(m);
+ return rc;
+}
+
+/**
+ * spdm_challenge_rsp_sz() - Calculate CHALLENGE_AUTH response size
+ *
+ * @spdm_state: SPDM session state
+ * @rsp: CHALLENGE_AUTH response (optional)
+ *
+ * A CHALLENGE_AUTH response contains multiple variable-length fields
+ * as well as optional fields. This helper eases calculating its size.
+ *
+ * If @rsp is %NULL, assume the maximum OpaqueDataLength of 1024 bytes
+ * (SPDM 1.0.0 table 21). Otherwise read OpaqueDataLength from @rsp.
+ * OpaqueDataLength can only be > 0 for SPDM 1.0 and 1.1, as they lack
+ * the OtherParamsSupport field in the NEGOTIATE_ALGORITHMS request.
+ * For SPDM 1.2+, we do not offer any Opaque Data Formats in that field,
+ * which forces OpaqueDataLength to 0 (SPDM 1.2.0 margin no 261).
+ */
+static size_t spdm_challenge_rsp_sz(struct spdm_state *spdm_state,
+ struct spdm_challenge_rsp *rsp)
+{
+ size_t size = sizeof(*rsp) /* Header */
+ + spdm_state->h /* CertChainHash */
+ + 32; /* Nonce */
+
+ if (rsp)
+ /* May be unaligned if hash algorithm has unusual length. */
+ size += get_unaligned_le16((u8 *)rsp + size);
+ else
+ size += SPDM_MAX_OPAQUE_DATA; /* OpaqueData */
+
+ size += 2; /* OpaqueDataLength */
+
+ if (spdm_state->version >= 0x13)
+ size += 8; /* RequesterContext */
+
+ return size + spdm_state->s; /* Signature */
+}
+
+static int spdm_challenge(struct spdm_state *spdm_state, u8 slot)
+{
+ size_t req_sz, rsp_sz, rsp_sz_max, sig_offset;
+ struct spdm_challenge_req req = {
+ .code = SPDM_CHALLENGE,
+ .param1 = slot,
+ .param2 = 0, /* no measurement summary hash */
+ };
+ struct spdm_challenge_rsp *rsp;
+ int rc, length;
+
+ get_random_bytes(&req.nonce, sizeof(req.nonce));
+
+ if (spdm_state->version <= 0x12)
+ req_sz = offsetof(typeof(req), context);
+ else
+ req_sz = sizeof(req);
+
+ rsp_sz_max = spdm_challenge_rsp_sz(spdm_state, NULL);
+ rsp = kzalloc(rsp_sz_max, GFP_KERNEL);
+ if (!rsp)
+ return -ENOMEM;
+
+ rc = spdm_exchange(spdm_state, &req, req_sz, rsp, rsp_sz_max);
+ if (rc < 0)
+ goto err_free_rsp;
+
+ length = rc;
+ rsp_sz = spdm_challenge_rsp_sz(spdm_state, rsp);
+ if (length < rsp_sz) {
+ dev_err(spdm_state->dev, "Truncated challenge_auth response\n");
+ rc = -EIO;
+ goto err_free_rsp;
+ }
+
+ /* Last step of building the hash */
+ rc = crypto_shash_update(spdm_state->desc, (u8 *)&req, req_sz);
+ if (rc)
+ goto err_free_rsp;
+
+ sig_offset = rsp_sz - spdm_state->s;
+ rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, sig_offset);
+ if (rc)
+ goto err_free_rsp;
+
+ /* Hash is complete and signature received; verify against leaf key */
+ rc = spdm_verify_signature(spdm_state, (u8 *)rsp + sig_offset,
+ "responder-challenge_auth signing");
+ if (rc)
+ dev_err(spdm_state->dev,
+ "Failed to verify challenge_auth signature: %d\n", rc);
+
+err_free_rsp:
+ kfree(rsp);
+ return rc;
+}
+
+static void spdm_reset(struct spdm_state *spdm_state)
+{
+ public_key_free(spdm_state->leaf_key);
+ spdm_state->leaf_key = NULL;
+
+ kfree(spdm_state->desc);
+ spdm_state->desc = NULL;
+
+ crypto_free_shash(spdm_state->shash);
+ spdm_state->shash = NULL;
+}
+
+/**
+ * spdm_authenticate() - Authenticate device
+ *
+ * @spdm_state: SPDM session state
+ *
+ * Authenticate a device through a sequence of GET_VERSION, GET_CAPABILITIES,
+ * NEGOTIATE_ALGORITHMS, GET_DIGESTS, GET_CERTIFICATE and CHALLENGE exchanges.
+ *
+ * Perform internal locking to serialize multiple concurrent invocations.
+ * Can be called repeatedly for reauthentication.
+ *
+ * Return 0 on success or a negative errno. In particular, -EPROTONOSUPPORT
+ * indicates that authentication is not supported by the device.
+ */
+int spdm_authenticate(struct spdm_state *spdm_state)
+{
+ size_t transcript_sz;
+ void *transcript;
+ int rc = -ENOMEM;
+ u8 slot;
+
+ mutex_lock(&spdm_state->lock);
+ spdm_reset(spdm_state);
+
+ /*
+ * For CHALLENGE_AUTH signature verification, a hash is computed over
+ * all exchanged messages to detect modification by a man-in-the-middle
+ * or media error. However the hash algorithm is not known until the
+ * NEGOTIATE_ALGORITHMS response has been received. The preceding
+ * GET_VERSION and GET_CAPABILITIES exchanges are therefore stashed
+ * in a transcript buffer and consumed once the algorithm is known.
+ * The buffer size is sufficient for the largest possible messages with
+ * 255 version entries and the capability fields added by SPDM 1.2.
+ */
+ transcript = kzalloc(struct_size_t(struct spdm_get_version_rsp,
+ version_number_entries, 255) +
+ sizeof(struct spdm_get_capabilities_reqrsp) * 2,
+ GFP_KERNEL);
+ if (!transcript)
+ goto unlock;
+
+ rc = spdm_get_version(spdm_state, transcript, &transcript_sz);
+ if (rc)
+ goto unlock;
+
+ rc = spdm_get_capabilities(spdm_state, transcript + transcript_sz,
+ &transcript_sz);
+ if (rc)
+ goto unlock;
+
+ rc = spdm_negotiate_algs(spdm_state, transcript, transcript_sz);
+ if (rc)
+ goto unlock;
+
+ rc = spdm_get_digests(spdm_state);
+ if (rc)
+ goto unlock;
+
+ for_each_set_bit(slot, &spdm_state->slot_mask, SPDM_SLOTS) {
+ rc = spdm_get_certificate(spdm_state, slot);
+ if (rc == 0)
+ break; /* success */
+ if (rc != -ENOKEY && rc != -EKEYREJECTED)
+ break; /* try next slot only on signature error */
+ }
+ if (rc)
+ goto unlock;
+
+ rc = spdm_challenge(spdm_state, slot);
+
+unlock:
+ if (rc)
+ spdm_reset(spdm_state);
+ spdm_state->authenticated = !rc;
+ mutex_unlock(&spdm_state->lock);
+ kfree(transcript);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(spdm_authenticate);
+
+/**
+ * spdm_authenticated() - Whether device was authenticated successfully
+ *
+ * @spdm_state: SPDM session state
+ *
+ * Return true if the most recent spdm_authenticate() call was successful.
+ */
+bool spdm_authenticated(struct spdm_state *spdm_state)
+{
+ return spdm_state->authenticated;
+}
+EXPORT_SYMBOL_GPL(spdm_authenticated);
+
+/**
+ * spdm_create() - Allocate SPDM session
+ *
+ * @dev: Transport device
+ * @transport: Transport function to perform one message exchange
+ * @transport_priv: Transport private data
+ * @transport_sz: Maximum message size the transport is capable of (in bytes)
+ * @keyring: Trusted root certificates
+ *
+ * Returns a pointer to the allocated SPDM session state or NULL on error.
+ */
+struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
+ void *transport_priv, u32 transport_sz,
+ struct key *keyring)
+{
+ struct spdm_state *spdm_state = kzalloc(sizeof(*spdm_state), GFP_KERNEL);
+
+ if (!spdm_state)
+ return NULL;
+
+ spdm_state->dev = dev;
+ spdm_state->transport = transport;
+ spdm_state->transport_priv = transport_priv;
+ spdm_state->transport_sz = transport_sz;
+ spdm_state->root_keyring = keyring;
+
+ mutex_init(&spdm_state->lock);
+
+ return spdm_state;
+}
+EXPORT_SYMBOL_GPL(spdm_create);
+
+/**
+ * spdm_destroy() - Destroy SPDM session
+ *
+ * @spdm_state: SPDM session state
+ */
+void spdm_destroy(struct spdm_state *spdm_state)
+{
+ spdm_reset(spdm_state);
+ mutex_destroy(&spdm_state->lock);
+ kfree(spdm_state);
+}
+EXPORT_SYMBOL_GPL(spdm_destroy);
+
+MODULE_LICENSE("GPL");
--
2.40.1

2023-09-28 18:06:53

by Lukas Wunner

[permalink] [raw]
Subject: [PATCH 08/12] PCI/CMA: Authenticate devices on enumeration

From: Jonathan Cameron <[email protected]>

Component Measurement and Authentication (CMA, PCIe r6.1 sec 6.31)
allows for measurement and authentication of PCIe devices. It is
based on the Security Protocol and Data Model specification (SPDM,
https://www.dmtf.org/dsp/DSP0274).

CMA-SPDM in turn forms the basis for Integrity and Data Encryption
(IDE, PCIe r6.1 sec 6.33) because the key material used by IDE is
exchanged over a CMA-SPDM session.

As a first step, authenticate CMA-capable devices on enumeration.
A subsequent commit will expose the result in sysfs.

When allocating SPDM session state with spdm_create(), the maximum SPDM
message length needs to be passed. Make the PCI_DOE_MAX_LENGTH macro
public and calculate the maximum payload length from it.
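For illustration (the figures follow directly from the macros added to
include/linux/pci-doe.h below): a DOE data object is capped at 2^18
dwords including its two header dwords, so the maximum SPDM payload is

	PCI_DOE_MAX_PAYLOAD = ((1 << 18) - 2) * sizeof(u32)
	                    = 262142 * 4
	                    = 1048568 bytes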

Credits: Jonathan wrote a proof-of-concept of this CMA implementation.
Lukas reworked it for upstream. Wilfred contributed fixes for issues
discovered during testing.

Signed-off-by: Jonathan Cameron <[email protected]>
Signed-off-by: Wilfred Mallawa <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
---
MAINTAINERS | 1 +
drivers/pci/Kconfig | 13 ++++++
drivers/pci/Makefile | 2 +
drivers/pci/cma.c | 97 +++++++++++++++++++++++++++++++++++++++++
drivers/pci/doe.c | 3 --
drivers/pci/pci.h | 8 ++++
drivers/pci/probe.c | 1 +
drivers/pci/remove.c | 1 +
include/linux/pci-doe.h | 4 ++
include/linux/pci.h | 4 ++
10 files changed, 131 insertions(+), 3 deletions(-)
create mode 100644 drivers/pci/cma.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 2591d2217d65..70a2beb4a278 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19305,6 +19305,7 @@ M: Lukas Wunner <[email protected]>
L: [email protected]
L: [email protected]
S: Maintained
+F: drivers/pci/cma*
F: include/linux/spdm.h
F: lib/spdm*

diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index e9ae66cc4189..c9aa5253ac1f 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -116,6 +116,19 @@ config XEN_PCIDEV_FRONTEND
config PCI_ATS
bool

+config PCI_CMA
+ bool "Component Measurement and Authentication (CMA-SPDM)"
+ select CRYPTO_ECDSA
+ select CRYPTO_RSA
+ select CRYPTO_SHA256
+ select CRYPTO_SHA512
+ select PCI_DOE
+ select SPDM_REQUESTER
+ help
+ Authenticate devices on enumeration per PCIe r6.1 sec 6.31.
+ A PCI DOE mailbox is used as transport for DMTF SPDM based
+ attestation, measurement and secure channel establishment.
+
config PCI_DOE
bool

diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index cc8b4e01e29d..e0705b82690b 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -34,6 +34,8 @@ obj-$(CONFIG_VGA_ARB) += vgaarb.o
obj-$(CONFIG_PCI_DOE) += doe.o
obj-$(CONFIG_PCI_DYNAMIC_OF_NODES) += of_property.o

+obj-$(CONFIG_PCI_CMA) += cma.o
+
# Endpoint library must be initialized before its users
obj-$(CONFIG_PCI_ENDPOINT) += endpoint/

diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
new file mode 100644
index 000000000000..06e5846325e3
--- /dev/null
+++ b/drivers/pci/cma.c
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Component Measurement and Authentication (CMA-SPDM, PCIe r6.1 sec 6.31)
+ *
+ * Copyright (C) 2021 Huawei
+ * Jonathan Cameron <[email protected]>
+ *
+ * Copyright (C) 2022-23 Intel Corporation
+ */
+
+#define dev_fmt(fmt) "CMA: " fmt
+
+#include <linux/pci.h>
+#include <linux/pci-doe.h>
+#include <linux/pm_runtime.h>
+#include <linux/spdm.h>
+
+#include "pci.h"
+
+#define PCI_DOE_PROTOCOL_CMA 1
+
+/* Keyring that userspace can poke certs into */
+static struct key *pci_cma_keyring;
+
+static int pci_doe_transport(void *priv, struct device *dev,
+ const void *request, size_t request_sz,
+ void *response, size_t response_sz)
+{
+ struct pci_doe_mb *doe = priv;
+ int rc;
+
+ /*
+ * CMA-SPDM operation in non-D0 states is optional (PCIe r6.1
+ * sec 6.31.3). The spec does not define a way to determine
+ * if it's supported, so resume to D0 unconditionally.
+ */
+ rc = pm_runtime_resume_and_get(dev);
+ if (rc)
+ return rc;
+
+ rc = pci_doe(doe, PCI_VENDOR_ID_PCI_SIG, PCI_DOE_PROTOCOL_CMA,
+ request, request_sz, response, response_sz);
+
+ pm_runtime_put(dev);
+
+ return rc;
+}
+
+void pci_cma_init(struct pci_dev *pdev)
+{
+ struct pci_doe_mb *doe;
+ int rc;
+
+ if (!pci_cma_keyring) {
+ return;
+ }
+
+ if (!pci_is_pcie(pdev))
+ return;
+
+ doe = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_PCI_SIG,
+ PCI_DOE_PROTOCOL_CMA);
+ if (!doe)
+ return;
+
+ pdev->spdm_state = spdm_create(&pdev->dev, pci_doe_transport, doe,
+ PCI_DOE_MAX_PAYLOAD, pci_cma_keyring);
+ if (!pdev->spdm_state) {
+ return;
+ }
+
+ rc = spdm_authenticate(pdev->spdm_state);
+}
+
+void pci_cma_destroy(struct pci_dev *pdev)
+{
+ if (pdev->spdm_state)
+ spdm_destroy(pdev->spdm_state);
+}
+
+__init static int pci_cma_keyring_init(void)
+{
+ pci_cma_keyring = keyring_alloc(".cma", KUIDT_INIT(0), KGIDT_INIT(0),
+ current_cred(),
+ (KEY_POS_ALL & ~KEY_POS_SETATTR) |
+ KEY_USR_VIEW | KEY_USR_READ |
+ KEY_USR_WRITE | KEY_USR_SEARCH,
+ KEY_ALLOC_NOT_IN_QUOTA |
+ KEY_ALLOC_SET_KEEP, NULL, NULL);
+ if (IS_ERR(pci_cma_keyring)) {
+ pr_err("Could not allocate keyring\n");
+ return PTR_ERR(pci_cma_keyring);
+ }
+
+ return 0;
+}
+arch_initcall(pci_cma_keyring_init);
diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c
index e3aab5edaf70..79f0336eb0c3 100644
--- a/drivers/pci/doe.c
+++ b/drivers/pci/doe.c
@@ -31,9 +31,6 @@
#define PCI_DOE_FLAG_CANCEL 0
#define PCI_DOE_FLAG_DEAD 1

-/* Max data object length is 2^18 dwords */
-#define PCI_DOE_MAX_LENGTH (1 << 18)
-
/**
* struct pci_doe_mb - State for a single DOE mailbox
*
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 39a8932dc340..bd80a0369c9c 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -322,6 +322,14 @@ static inline void pci_doe_destroy(struct pci_dev *pdev) { }
static inline void pci_doe_disconnected(struct pci_dev *pdev) { }
#endif

+#ifdef CONFIG_PCI_CMA
+void pci_cma_init(struct pci_dev *pdev);
+void pci_cma_destroy(struct pci_dev *pdev);
+#else
+static inline void pci_cma_init(struct pci_dev *pdev) { }
+static inline void pci_cma_destroy(struct pci_dev *pdev) { }
+#endif
+
/**
* pci_dev_set_io_state - Set the new error state if possible.
*
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 795534589b98..1420a8d82386 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -2487,6 +2487,7 @@ static void pci_init_capabilities(struct pci_dev *dev)
pci_dpc_init(dev); /* Downstream Port Containment */
pci_rcec_init(dev); /* Root Complex Event Collector */
pci_doe_init(dev); /* Data Object Exchange */
+ pci_cma_init(dev); /* Component Measurement & Auth */

pcie_report_downtraining(dev);
pci_init_reset_methods(dev);
diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
index d749ea8250d6..f009ac578997 100644
--- a/drivers/pci/remove.c
+++ b/drivers/pci/remove.c
@@ -39,6 +39,7 @@ static void pci_destroy_dev(struct pci_dev *dev)
list_del(&dev->bus_list);
up_write(&pci_bus_sem);

+ pci_cma_destroy(dev);
pci_doe_destroy(dev);
pcie_aspm_exit_link_state(dev);
pci_bridge_d3_update(dev);
diff --git a/include/linux/pci-doe.h b/include/linux/pci-doe.h
index 1f14aed4354b..0d3d7656c456 100644
--- a/include/linux/pci-doe.h
+++ b/include/linux/pci-doe.h
@@ -15,6 +15,10 @@

struct pci_doe_mb;

+/* Max data object length is 2^18 dwords (including 2 dwords for header) */
+#define PCI_DOE_MAX_LENGTH (1 << 18)
+#define PCI_DOE_MAX_PAYLOAD ((PCI_DOE_MAX_LENGTH - 2) * sizeof(u32))
+
struct pci_doe_mb *pci_find_doe_mailbox(struct pci_dev *pdev, u16 vendor,
u8 type);

diff --git a/include/linux/pci.h b/include/linux/pci.h
index 8c7c2c3c6c65..0c0123317df6 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -39,6 +39,7 @@
#include <linux/io.h>
#include <linux/resource_ext.h>
#include <linux/msi_api.h>
+#include <linux/spdm.h>
#include <uapi/linux/pci.h>

#include <linux/pci_ids.h>
@@ -515,6 +516,9 @@ struct pci_dev {
#endif
#ifdef CONFIG_PCI_DOE
struct xarray doe_mbs; /* Data Object Exchange mailboxes */
+#endif
+#ifdef CONFIG_PCI_CMA
+ struct spdm_state *spdm_state; /* Security Protocol and Data Model */
#endif
u16 acs_cap; /* ACS Capability offset */
phys_addr_t rom; /* Physical address if not from BAR */
--
2.40.1

2023-09-28 18:08:51

by Lukas Wunner

[permalink] [raw]
Subject: [PATCH 05/12] crypto: akcipher - Support more than one signature encoding

Currently only a single default signature encoding is supported per
akcipher.

A subsequent commit will allow a second encoding for ecdsa, namely P1363
alternatively to X9.62.

To accommodate for that, amend struct akcipher_request and struct
crypto_akcipher_sync_data to store the desired signature encoding for
verify and sign ops.

Amend akcipher_request_set_crypt(), crypto_sig_verify() and
crypto_sig_sign() with an additional parameter which specifies the
desired signature encoding. Adjust all callers.
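
As a minimal sketch (mirroring the public_key.c hunk below, not a new
caller added by this patch), a verify invocation now names the encoding
explicitly:

	/* sig->encoding is e.g. "pkcs1" or "x962"; "p1363" is added later */
	ret = crypto_sig_verify(tfm, sig->s, sig->s_size,
				sig->digest, sig->digest_size,
				sig->encoding);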

Signed-off-by: Lukas Wunner <[email protected]>
---
crypto/akcipher.c | 2 +-
crypto/asymmetric_keys/public_key.c | 4 ++--
crypto/internal.h | 1 +
crypto/rsa-pkcs1pad.c | 11 +++++++----
crypto/sig.c | 6 ++++--
crypto/testmgr.c | 8 +++++---
crypto/testmgr.h | 1 +
include/crypto/akcipher.h | 10 +++++++++-
include/crypto/sig.h | 6 ++++--
9 files changed, 34 insertions(+), 15 deletions(-)

diff --git a/crypto/akcipher.c b/crypto/akcipher.c
index 52813f0b19e4..88501c0886d2 100644
--- a/crypto/akcipher.c
+++ b/crypto/akcipher.c
@@ -221,7 +221,7 @@ int crypto_akcipher_sync_prep(struct crypto_akcipher_sync_data *data)
sg = &data->sg;
sg_init_one(sg, buf, mlen);
akcipher_request_set_crypt(req, sg, data->dst ? sg : NULL,
- data->slen, data->dlen);
+ data->slen, data->dlen, data->enc);

crypto_init_wait(&data->cwait);
akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index abeecb8329b3..7f96e8e501db 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -354,7 +354,7 @@ static int software_key_eds_op(struct kernel_pkey_params *params,
if (!issig)
break;
ret = crypto_sig_sign(sig, in, params->in_len,
- out, params->out_len);
+ out, params->out_len, params->encoding);
break;
default:
BUG();
@@ -438,7 +438,7 @@ int public_key_verify_signature(const struct public_key *pkey,
goto error_free_key;

ret = crypto_sig_verify(tfm, sig->s, sig->s_size,
- sig->digest, sig->digest_size);
+ sig->digest, sig->digest_size, sig->encoding);

error_free_key:
kfree_sensitive(key);
diff --git a/crypto/internal.h b/crypto/internal.h
index 63e59240d5fb..268315b13ccd 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -41,6 +41,7 @@ struct crypto_akcipher_sync_data {
void *dst;
unsigned int slen;
unsigned int dlen;
+ const char *enc;

struct akcipher_request *req;
struct crypto_wait cwait;
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index d2e5e104f8cf..5f9313a3b01e 100644
--- a/crypto/rsa-pkcs1pad.c
+++ b/crypto/rsa-pkcs1pad.c
@@ -262,7 +262,8 @@ static int pkcs1pad_encrypt(struct akcipher_request *req)

/* Reuse output buffer */
akcipher_request_set_crypt(&req_ctx->child_req, req_ctx->in_sg,
- req->dst, ctx->key_size - 1, req->dst_len);
+ req->dst, ctx->key_size - 1, req->dst_len,
+ NULL);

err = crypto_akcipher_encrypt(&req_ctx->child_req);
if (err != -EINPROGRESS && err != -EBUSY)
@@ -362,7 +363,7 @@ static int pkcs1pad_decrypt(struct akcipher_request *req)
/* Reuse input buffer, output to a new buffer */
akcipher_request_set_crypt(&req_ctx->child_req, req->src,
req_ctx->out_sg, req->src_len,
- ctx->key_size);
+ ctx->key_size, NULL);

err = crypto_akcipher_decrypt(&req_ctx->child_req);
if (err != -EINPROGRESS && err != -EBUSY)
@@ -419,7 +420,8 @@ static int pkcs1pad_sign(struct akcipher_request *req)

/* Reuse output buffer */
akcipher_request_set_crypt(&req_ctx->child_req, req_ctx->in_sg,
- req->dst, ctx->key_size - 1, req->dst_len);
+ req->dst, ctx->key_size - 1, req->dst_len,
+ req->enc);

err = crypto_akcipher_decrypt(&req_ctx->child_req);
if (err != -EINPROGRESS && err != -EBUSY)
@@ -551,7 +553,8 @@ static int pkcs1pad_verify(struct akcipher_request *req)

/* Reuse input buffer, output to a new buffer */
akcipher_request_set_crypt(&req_ctx->child_req, req->src,
- req_ctx->out_sg, sig_size, ctx->key_size);
+ req_ctx->out_sg, sig_size, ctx->key_size,
+ req->enc);

err = crypto_akcipher_encrypt(&req_ctx->child_req);
if (err != -EINPROGRESS && err != -EBUSY)
diff --git a/crypto/sig.c b/crypto/sig.c
index 224c47019297..4fc1a8f865e4 100644
--- a/crypto/sig.c
+++ b/crypto/sig.c
@@ -89,7 +89,7 @@ EXPORT_SYMBOL_GPL(crypto_sig_maxsize);

int crypto_sig_sign(struct crypto_sig *tfm,
const void *src, unsigned int slen,
- void *dst, unsigned int dlen)
+ void *dst, unsigned int dlen, const char *enc)
{
struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
struct crypto_akcipher_sync_data data = {
@@ -98,6 +98,7 @@ int crypto_sig_sign(struct crypto_sig *tfm,
.dst = dst,
.slen = slen,
.dlen = dlen,
+ .enc = enc,
};

return crypto_akcipher_sync_prep(&data) ?:
@@ -108,7 +109,7 @@ EXPORT_SYMBOL_GPL(crypto_sig_sign);

int crypto_sig_verify(struct crypto_sig *tfm,
const void *src, unsigned int slen,
- const void *digest, unsigned int dlen)
+ const void *digest, unsigned int dlen, const char *enc)
{
struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
struct crypto_akcipher_sync_data data = {
@@ -116,6 +117,7 @@ int crypto_sig_verify(struct crypto_sig *tfm,
.src = src,
.slen = slen,
.dlen = dlen,
+ .enc = enc,
};
int err;

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 216878c8bc3d..d5dd715673dd 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -4154,11 +4154,12 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
goto free_all;
memcpy(xbuf[1], c, c_size);
sg_set_buf(&src_tab[2], xbuf[1], c_size);
- akcipher_request_set_crypt(req, src_tab, NULL, m_size, c_size);
+ akcipher_request_set_crypt(req, src_tab, NULL, m_size, c_size,
+ vecs->enc);
} else {
sg_init_one(&dst, outbuf_enc, out_len_max);
akcipher_request_set_crypt(req, src_tab, &dst, m_size,
- out_len_max);
+ out_len_max, NULL);
}
akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
crypto_req_done, &wait);
@@ -4217,7 +4218,8 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
sg_init_one(&src, xbuf[0], c_size);
sg_init_one(&dst, outbuf_dec, out_len_max);
crypto_init_wait(&wait);
- akcipher_request_set_crypt(req, &src, &dst, c_size, out_len_max);
+ akcipher_request_set_crypt(req, &src, &dst, c_size, out_len_max,
+ vecs->enc);

err = crypto_wait_req(vecs->siggen_sigver_test ?
/* Run asymmetric signature generation */
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 5ca7a412508f..ad57e7af2e14 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -153,6 +153,7 @@ struct akcipher_testvec {
const unsigned char *params;
const unsigned char *m;
const unsigned char *c;
+ const char *enc;
unsigned int key_len;
unsigned int param_len;
unsigned int m_size;
diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
index 670508f1dca1..00bbec69af3b 100644
--- a/include/crypto/akcipher.h
+++ b/include/crypto/akcipher.h
@@ -30,6 +30,8 @@
* In case of error where the dst sgl size was insufficient,
* it will be updated to the size required for the operation.
* For verify op this is size of digest part in @src.
+ * @enc: For verify op it's the encoding of the signature part of @src.
+ * For sign op it's the encoding of the signature in @dst.
* @__ctx: Start of private context data
*/
struct akcipher_request {
@@ -38,6 +40,7 @@ struct akcipher_request {
struct scatterlist *dst;
unsigned int src_len;
unsigned int dst_len;
+ const char *enc;
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};

@@ -272,17 +275,22 @@ static inline void akcipher_request_set_callback(struct akcipher_request *req,
* @src_len: size of the src input scatter list to be processed
* @dst_len: size of the dst output scatter list or size of signature
* portion in @src for verify op
+ * @enc: encoding of signature portion in @src for verify op,
+ * encoding of signature in @dst for sign op,
+ * NULL for encrypt and decrypt op
*/
static inline void akcipher_request_set_crypt(struct akcipher_request *req,
struct scatterlist *src,
struct scatterlist *dst,
unsigned int src_len,
- unsigned int dst_len)
+ unsigned int dst_len,
+ const char *enc)
{
req->src = src;
req->dst = dst;
req->src_len = src_len;
req->dst_len = dst_len;
+ req->enc = enc;
}

/**
diff --git a/include/crypto/sig.h b/include/crypto/sig.h
index 641b4714c448..1df18005c854 100644
--- a/include/crypto/sig.h
+++ b/include/crypto/sig.h
@@ -81,12 +81,13 @@ int crypto_sig_maxsize(struct crypto_sig *tfm);
* @slen: source length
* @dst: destination buffer
* @dlen: destination length
+ * @enc: signature encoding
*
* Return: zero on success; error code in case of error
*/
int crypto_sig_sign(struct crypto_sig *tfm,
const void *src, unsigned int slen,
- void *dst, unsigned int dlen);
+ void *dst, unsigned int dlen, const char *enc);

/**
* crypto_sig_verify() - Invoke signature verification
@@ -99,12 +100,13 @@ int crypto_sig_sign(struct crypto_sig *tfm,
* @slen: source length
* @digest: digest
* @dlen: digest length
+ * @enc: signature encoding
*
* Return: zero on verification success; error code in case of error.
*/
int crypto_sig_verify(struct crypto_sig *tfm,
const void *src, unsigned int slen,
- const void *digest, unsigned int dlen);
+ const void *digest, unsigned int dlen, const char *enc);

/**
* crypto_sig_set_pubkey() - Invoke set public key operation
--
2.40.1

2023-09-28 18:11:30

by Lukas Wunner

[permalink] [raw]
Subject: [PATCH 06/12] crypto: ecdsa - Support P1363 signature encoding

Alternatively to the X9.62 encoding of ecdsa signatures, which uses
ASN.1 and is already supported by the kernel, there's another common
encoding called P1363. It stores r and s as the concatenation of two
big endian, unsigned integers. The name originates from IEEE P1363.

The Security Protocol and Data Model (SPDM) specification prescribes
that ecdsa signatures are encoded according to P1363:

"For ECDSA signatures, excluding SM2, in SPDM, the signature shall be
the concatenation of r and s. The size of r shall be the size of
the selected curve. Likewise, the size of s shall be the size of
the selected curve. See BaseAsymAlgo in NEGOTIATE_ALGORITHMS for
the size of r and s. The byte order for r and s shall be in big
endian order. When placing ECDSA signatures into an SPDM signature
field, r shall come first followed by s."

(SPDM 1.2.1 margin no 44,
https://www.dmtf.org/sites/default/files/standards/documents/DSP0274_1.2.1.pdf)
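
As a rough illustration (sizes taken from the max_sig_size computation
touched below; the DER figure assumes no leading-zero padding of r or s),
a NIST P-256 signature looks as follows in the two encodings:

	X9.62:  SEQUENCE { INTEGER r, INTEGER s }  -> up to 2 * (32 + 3) + 2 = 72 bytes
	P1363:  r || s                             -> exactly 2 * 32 = 64 bytes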

A subsequent commit introduces an SPDM library to enable PCI device
authentication, so add support for P1363 ecdsa signature verification.

Signed-off-by: Lukas Wunner <[email protected]>
---
crypto/asymmetric_keys/public_key.c | 8 ++++++--
crypto/ecdsa.c | 16 +++++++++++++---
crypto/testmgr.h | 15 +++++++++++++++
3 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index 7f96e8e501db..84c4ed02a270 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -105,7 +105,8 @@ software_key_determine_akcipher(const struct public_key *pkey,
return -EINVAL;
*sig = false;
} else if (strncmp(pkey->pkey_algo, "ecdsa", 5) == 0) {
- if (strcmp(encoding, "x962") != 0)
+ if (strcmp(encoding, "x962") != 0 &&
+ strcmp(encoding, "p1363") != 0)
return -EINVAL;
/*
* ECDSA signatures are taken over a raw hash, so they don't
@@ -246,7 +247,10 @@ static int software_key_query(const struct kernel_pkey_params *params,
* which is actually 2 'key_size'-bit integers encoded in
* ASN.1. Account for the ASN.1 encoding overhead here.
*/
- info->max_sig_size = 2 * (len + 3) + 2;
+ if (strcmp(params->encoding, "x962") == 0)
+ info->max_sig_size = 2 * (len + 3) + 2;
+ else if (strcmp(params->encoding, "p1363") == 0)
+ info->max_sig_size = 2 * len;
} else {
info->max_data_size = len;
info->max_sig_size = len;
diff --git a/crypto/ecdsa.c b/crypto/ecdsa.c
index fbd76498aba8..cc3082c6f67d 100644
--- a/crypto/ecdsa.c
+++ b/crypto/ecdsa.c
@@ -159,10 +159,20 @@ static int ecdsa_verify(struct akcipher_request *req)
sg_nents_for_len(req->src, req->src_len + req->dst_len),
buffer, req->src_len + req->dst_len, 0);

- ret = asn1_ber_decoder(&ecdsasignature_decoder, &sig_ctx,
- buffer, req->src_len);
- if (ret < 0)
+ if (strcmp(req->enc, "x962") == 0) {
+ ret = asn1_ber_decoder(&ecdsasignature_decoder, &sig_ctx,
+ buffer, req->src_len);
+ if (ret < 0)
+ goto error;
+ } else if (strcmp(req->enc, "p1363") == 0 &&
+ req->src_len == 2 * keylen) {
+ ecc_swap_digits(buffer, sig_ctx.r, ctx->curve->g.ndigits);
+ ecc_swap_digits(buffer + keylen,
+ sig_ctx.s, ctx->curve->g.ndigits);
+ } else {
+ ret = -EINVAL;
goto error;
+ }

/* if the hash is shorter then we will add leading zeros to fit to ndigits */
diff = keylen - req->dst_len;
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index ad57e7af2e14..f12f70818147 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -674,6 +674,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
"\x68\x01\x9d\xba\xce\x83\x08\xef\x95\x52\x7b\xa0\x0f\xe4\x18\x86"
"\x80\x6f\xa5\x79\x77\xda\xd0",
.c_size = 55,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -698,6 +699,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
"\x4f\x53\x75\xc8\x02\x48\xeb\xc3\x92\x0f\x1e\x72\xee\xc4\xa3\xe3"
"\x5c\x99\xdb\x92\x5b\x36",
.c_size = 54,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -722,6 +724,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
"\x69\x43\xfd\x48\x19\x86\xcf\x32\xdd\x41\x74\x6a\x51\xc7\xd9\x7d"
"\x3a\x97\xd9\xcd\x1a\x6a\x49",
.c_size = 55,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -747,6 +750,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
"\xbc\x5a\x1f\x82\x96\x61\xd7\xd1\x01\x77\x44\x5d\x53\xa4\x7c\x93"
"\x12\x3b\x3b\x28\xfb\x6d\xe1",
.c_size = 55,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -773,6 +777,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
"\xb4\x22\x9a\x98\x73\x3c\x83\xa9\x14\x2a\x5e\xf5\xe5\xfb\x72\x28"
"\x6a\xdf\x97\xfd\x82\x76\x24",
.c_size = 55,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
},
@@ -803,6 +808,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
"\x8a\xfa\x54\x93\x29\xa7\x70\x86\xf1\x03\x03\xf3\x3b\xe2\x73\xf7"
"\xfb\x9d\x8b\xde\xd4\x8d\x6f\xad",
.c_size = 72,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -829,6 +835,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
"\x4a\x77\x22\xec\xc8\x66\xbf\x50\x05\x58\x39\x0e\x26\x92\xce\xd5"
"\x2e\x8b\xde\x5a\x04\x0e",
.c_size = 70,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -855,6 +862,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
"\xa9\x81\xac\x4a\x50\xd0\x91\x0a\x6e\x1b\xc4\xaf\xe1\x83\xc3\x4f"
"\x2a\x65\x35\x23\xe3\x1d\xfa",
.c_size = 71,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -882,6 +890,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
"\x19\xfb\x5f\x92\xf4\xc9\x23\x37\x69\xf4\x3b\x4f\x47\xcf\x9b\x16"
"\xc0\x60\x11\x92\xdc\x17\x89\x12",
.c_size = 72,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -910,6 +919,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
"\x00\xdd\xab\xd4\xc0\x2b\xe6\x5c\xad\xc3\x78\x1c\xc2\xc1\x19\x76"
"\x31\x79\x4a\xe9\x81\x6a\xee",
.c_size = 71,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
},
@@ -944,6 +954,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
"\x74\xa0\x0f\xbf\xaf\xc3\x36\x76\x4a\xa1\x59\xf1\x1c\xa4\x58\x26"
"\x79\x12\x2a\xb7\xc5\x15\x92\xc5",
.c_size = 104,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -974,6 +985,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
"\x4d\xd0\xc6\x6e\xb0\xe9\xfc\x14\x9f\x19\xd0\x42\x8b\x93\xc2\x11"
"\x88\x2b\x82\x26\x5e\x1c\xda\xfb",
.c_size = 104,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -1004,6 +1016,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
"\xc0\x75\x3e\x23\x5e\x36\x4f\x8d\xde\x1e\x93\x8d\x95\xbb\x10\x0e"
"\xf4\x1f\x39\xca\x4d\x43",
.c_size = 102,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -1035,6 +1048,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
"\x44\x92\x8c\x86\x99\x65\xb3\x97\x96\x17\x04\xc9\x05\x77\xf1\x8e"
"\xab\x8d\x4e\xde\xe6\x6d\x9b\x66",
.c_size = 104,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
@@ -1067,6 +1081,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
"\x5f\x8d\x7a\xf9\xfb\x34\xe4\x8b\x80\xa5\xb6\xda\x2c\x4e\x45\xcf"
"\x3c\x93\xff\x50\x5d",
.c_size = 101,
+ .enc = "x962",
.public_key_vec = true,
.siggen_sigver_test = true,
},
--
2.40.1

2023-09-28 18:25:46

by Lukas Wunner

[permalink] [raw]
Subject: [PATCH 11/12] PCI/CMA: Expose in sysfs whether devices are authenticated

The PCI core has just been amended to authenticate CMA-capable devices
on enumeration and store the result in an "authenticated" bit in struct
pci_dev->spdm_state.

Expose the bit to user space through an eponymous sysfs attribute.

Allow user space to trigger reauthentication (e.g. after it has updated
the CMA keyring) by writing to the sysfs attribute.

Subject to further discussion, a future commit might add a user-defined
policy to forbid driver binding to devices which failed authentication,
similar to the "authorized" attribute for USB.

Alternatively, authentication success might be signaled to user space
through a uevent, whereupon it may bind a (blacklisted) driver.
A uevent signaling authentication failure might similarly cause user
space to unbind or outright remove the potentially malicious device.

Traffic from devices which failed authentication could also be filtered
through ACS I/O Request Blocking Enable (PCIe r6.1 sec 7.7.11.3) or
through Link Disable (PCIe r6.1 sec 7.5.3.7). Unlike an IOMMU, these
mechanisms not only protect the host, but also prevent malicious
peer-to-peer traffic to other devices.

Signed-off-by: Lukas Wunner <[email protected]>
---
Documentation/ABI/testing/sysfs-bus-pci | 27 +++++++++
drivers/pci/Kconfig | 3 +
drivers/pci/Makefile | 1 +
drivers/pci/cma-sysfs.c | 73 +++++++++++++++++++++++++
drivers/pci/cma.c | 2 +
drivers/pci/doe.c | 2 +
drivers/pci/pci-sysfs.c | 3 +
drivers/pci/pci.h | 1 +
include/linux/pci.h | 2 +
9 files changed, 114 insertions(+)
create mode 100644 drivers/pci/cma-sysfs.c

diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
index ecf47559f495..2ea9b8deffcc 100644
--- a/Documentation/ABI/testing/sysfs-bus-pci
+++ b/Documentation/ABI/testing/sysfs-bus-pci
@@ -500,3 +500,30 @@ Description:
console drivers from the device. Raw users of pci-sysfs
resourceN attributes must be terminated prior to resizing.
Success of the resizing operation is not guaranteed.
+
+What: /sys/bus/pci/devices/.../authenticated
+Date: September 2023
+Contact: Lukas Wunner <[email protected]>
+Description:
+ This file contains 1 if the device authenticated successfully
+ with CMA-SPDM (PCIe r6.1 sec 6.31). It contains 0 if the
+ device failed authentication (and may thus be malicious).
+
+ Writing anything to this file causes reauthentication.
+ That may be opportune after updating the .cma keyring.
+
+ The file is not visible if authentication is unsupported
+ by the device.
+
+ If the kernel could not determine whether authentication is
+ supported because memory was low or DOE communication with
+ the device was not working, the file is visible but accessing
+ it fails with error code ENOTTY.
+
+ This prevents downgrade attacks where an attacker consumes
+ memory or disturbs DOE communication in order to create the
+ appearance that a device does not support authentication.
+
+ The reason why authentication support could not be determined
+ is apparent from "dmesg". To probe for authentication support
+ again, exercise the "remove" and "rescan" attributes.
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index c9aa5253ac1f..51df3be3438e 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -129,6 +129,9 @@ config PCI_CMA
A PCI DOE mailbox is used as transport for DMTF SPDM based
attestation, measurement and secure channel establishment.

+config PCI_CMA_SYSFS
+ def_bool PCI_CMA && SYSFS
+
config PCI_DOE
bool

diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index a18812b8832b..612ae724cd2d 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_PCI_DOE) += doe.o
obj-$(CONFIG_PCI_DYNAMIC_OF_NODES) += of_property.o

obj-$(CONFIG_PCI_CMA) += cma.o cma-x509.o cma.asn1.o
+obj-$(CONFIG_PCI_CMA_SYSFS) += cma-sysfs.o
$(obj)/cma-x509.o: $(obj)/cma.asn1.h
$(obj)/cma.asn1.o: $(obj)/cma.asn1.c $(obj)/cma.asn1.h

diff --git a/drivers/pci/cma-sysfs.c b/drivers/pci/cma-sysfs.c
new file mode 100644
index 000000000000..b2d45f96601a
--- /dev/null
+++ b/drivers/pci/cma-sysfs.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Component Measurement and Authentication (CMA-SPDM, PCIe r6.1 sec 6.31)
+ *
+ * Copyright (C) 2023 Intel Corporation
+ */
+
+#include <linux/pci.h>
+#include <linux/spdm.h>
+#include <linux/sysfs.h>
+
+#include "pci.h"
+
+static ssize_t authenticated_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ ssize_t rc;
+
+ if (!pdev->cma_capable &&
+ (pdev->cma_init_failed || pdev->doe_init_failed))
+ return -ENOTTY;
+
+ rc = pci_cma_reauthenticate(pdev);
+ if (rc)
+ return rc;
+
+ return count;
+}
+
+static ssize_t authenticated_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ if (!pdev->cma_capable &&
+ (pdev->cma_init_failed || pdev->doe_init_failed))
+ return -ENOTTY;
+
+ return sysfs_emit(buf, "%u\n", spdm_authenticated(pdev->spdm_state));
+}
+static DEVICE_ATTR_RW(authenticated);
+
+static struct attribute *pci_cma_attrs[] = {
+ &dev_attr_authenticated.attr,
+ NULL
+};
+
+static umode_t pci_cma_attrs_are_visible(struct kobject *kobj,
+ struct attribute *a, int n)
+{
+ struct device *dev = kobj_to_dev(kobj);
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ /*
+ * If CMA or DOE initialization failed, CMA attributes must be visible
+ * and return an error on access. This prevents downgrade attacks
+ * where an attacker disturbs memory allocation or DOE communication
+ * in order to create the appearance that CMA is unsupported.
+ * The attacker may achieve that by simply hogging memory.
+ */
+ if (!pdev->cma_capable &&
+ !pdev->cma_init_failed && !pdev->doe_init_failed)
+ return 0;
+
+ return a->mode;
+}
+
+const struct attribute_group pci_cma_attr_group = {
+ .attrs = pci_cma_attrs,
+ .is_visible = pci_cma_attrs_are_visible,
+};
diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
index 89d23fdc37ec..c539ad85a28f 100644
--- a/drivers/pci/cma.c
+++ b/drivers/pci/cma.c
@@ -52,6 +52,7 @@ void pci_cma_init(struct pci_dev *pdev)
int rc;

if (!pci_cma_keyring) {
+ pdev->cma_init_failed = true;
return;
}

@@ -67,6 +68,7 @@ void pci_cma_init(struct pci_dev *pdev)
PCI_DOE_MAX_PAYLOAD, pci_cma_keyring,
pci_cma_validate);
if (!pdev->spdm_state) {
+ pdev->cma_init_failed = true;
return;
}

diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c
index 79f0336eb0c3..fabbda68edac 100644
--- a/drivers/pci/doe.c
+++ b/drivers/pci/doe.c
@@ -686,6 +686,7 @@ void pci_doe_init(struct pci_dev *pdev)
PCI_EXT_CAP_ID_DOE))) {
doe_mb = pci_doe_create_mb(pdev, offset);
if (IS_ERR(doe_mb)) {
+ pdev->doe_init_failed = true;
pci_err(pdev, "[%x] failed to create mailbox: %ld\n",
offset, PTR_ERR(doe_mb));
continue;
@@ -693,6 +694,7 @@ void pci_doe_init(struct pci_dev *pdev)

rc = xa_insert(&pdev->doe_mbs, offset, doe_mb, GFP_KERNEL);
if (rc) {
+ pdev->doe_init_failed = true;
pci_err(pdev, "[%x] failed to insert mailbox: %d\n",
offset, rc);
pci_doe_destroy_mb(doe_mb);
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index d9eede2dbc0e..7024e08e1b9a 100644
--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -1655,6 +1655,9 @@ static const struct attribute_group *pci_dev_attr_groups[] = {
#endif
#ifdef CONFIG_PCIEASPM
&aspm_ctrl_attr_group,
+#endif
+#ifdef CONFIG_PCI_CMA_SYSFS
+ &pci_cma_attr_group,
#endif
NULL,
};
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 71092ccf4fbd..d80cc06be0cc 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -328,6 +328,7 @@ void pci_cma_destroy(struct pci_dev *pdev);
int pci_cma_reauthenticate(struct pci_dev *pdev);
struct x509_certificate;
int pci_cma_validate(struct device *dev, struct x509_certificate *leaf_cert);
+extern const struct attribute_group pci_cma_attr_group;
#else
static inline void pci_cma_init(struct pci_dev *pdev) { }
static inline void pci_cma_destroy(struct pci_dev *pdev) { }
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 2bc11d8b567e..2c5fde81bb85 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -516,10 +516,12 @@ struct pci_dev {
#endif
#ifdef CONFIG_PCI_DOE
struct xarray doe_mbs; /* Data Object Exchange mailboxes */
+ unsigned int doe_init_failed:1;
#endif
#ifdef CONFIG_PCI_CMA
struct spdm_state *spdm_state; /* Security Protocol and Data Model */
unsigned int cma_capable:1; /* Authentication supported */
+ unsigned int cma_init_failed:1;
#endif
u16 acs_cap; /* ACS Capability offset */
phys_addr_t rom; /* Physical address if not from BAR */
--
2.40.1

2023-09-28 18:55:25

by Lukas Wunner

[permalink] [raw]
Subject: [PATCH 03/12] X.509: Move certificate length retrieval into new helper

The upcoming in-kernel SPDM library (Security Protocol and Data Model,
https://www.dmtf.org/dsp/DSP0274) needs to retrieve the length from
ASN.1 DER-encoded X.509 certificates.

Such code already exists in x509_load_certificate_list(), so move it
into a new helper for reuse by SPDM.
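
As a worked example (hypothetical certificate, following the arithmetic
in the helper below): a DER-encoded certificate starting with the bytes
0x30 0x82 0x04 0xa5 declares a SEQUENCE body of 0x04a5 = 1189 bytes, so
the helper returns 1189 + 4 = 1193 bytes for the complete certificate
including the four tag/length bytes.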

No functional change intended.

Signed-off-by: Lukas Wunner <[email protected]>
---
crypto/asymmetric_keys/x509_loader.c | 38 +++++++++++++++++++---------
include/keys/asymmetric-type.h | 2 ++
2 files changed, 28 insertions(+), 12 deletions(-)

diff --git a/crypto/asymmetric_keys/x509_loader.c b/crypto/asymmetric_keys/x509_loader.c
index a41741326998..121460a0de46 100644
--- a/crypto/asymmetric_keys/x509_loader.c
+++ b/crypto/asymmetric_keys/x509_loader.c
@@ -4,28 +4,42 @@
#include <linux/key.h>
#include <keys/asymmetric-type.h>

+int x509_get_certificate_length(const u8 *p, unsigned long buflen)
+{
+ int plen;
+
+ /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
+ * than 256 bytes in size.
+ */
+ if (buflen < 4)
+ return -EINVAL;
+
+ if (p[0] != 0x30 &&
+ p[1] != 0x82)
+ return -EINVAL;
+
+ plen = (p[2] << 8) | p[3];
+ plen += 4;
+ if (plen > buflen)
+ return -EINVAL;
+
+ return plen;
+}
+EXPORT_SYMBOL_GPL(x509_get_certificate_length);
+
int x509_load_certificate_list(const u8 cert_list[],
const unsigned long list_size,
const struct key *keyring)
{
key_ref_t key;
const u8 *p, *end;
- size_t plen;
+ int plen;

p = cert_list;
end = p + list_size;
while (p < end) {
- /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
- * than 256 bytes in size.
- */
- if (end - p < 4)
- goto dodgy_cert;
- if (p[0] != 0x30 &&
- p[1] != 0x82)
- goto dodgy_cert;
- plen = (p[2] << 8) | p[3];
- plen += 4;
- if (plen > end - p)
+ plen = x509_get_certificate_length(p, end - p);
+ if (plen < 0)
goto dodgy_cert;

key = key_create_or_update(make_key_ref(keyring, 1),
diff --git a/include/keys/asymmetric-type.h b/include/keys/asymmetric-type.h
index 69a13e1e5b2e..6705cfde25b9 100644
--- a/include/keys/asymmetric-type.h
+++ b/include/keys/asymmetric-type.h
@@ -84,6 +84,8 @@ extern struct key *find_asymmetric_key(struct key *keyring,
const struct asymmetric_key_id *id_2,
bool partial);

+int x509_get_certificate_length(const u8 *p, unsigned long buflen);
+
int x509_load_certificate_list(const u8 cert_list[], const unsigned long list_size,
const struct key *keyring);

--
2.40.1

2023-09-28 19:00:16

by Lukas Wunner

[permalink] [raw]
Subject: [PATCH 09/12] PCI/CMA: Validate Subject Alternative Name in certificates

PCIe r6.1 sec 6.31.3 stipulates requirements for X.509 Leaf Certificates
presented by devices, in particular the presence of a Subject Alternative
Name extension with a name that encodes the Vendor ID, Device ID, Device
Serial Number, etc.

This prevents a mismatch between the device identity in Config Space and
the certificate. A device cannot misappropriate a certificate from a
different device without also spoofing Config Space. As a corollary,
it cannot dupe an arbitrary driver into binding to it. (Only those
which bind to the device identity in the Subject Alternative Name work.)

Parse the Subject Alternative Name using a small ASN.1 module and
validate its contents. The theory of operation is explained in a code
comment at the top of the newly added cma-x509.c.
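
For illustration, with hypothetical IDs, the expected OtherName string
constructed from Config Space (see pci_cma_construct_san() below) takes
the form

	Vendor=8086:Device=0b60:CC=010802:REV=02:SSVID=8086:SSID=0000:0123456789abcdef

where the trailing component is the Device Serial Number and is omitted
if the device does not expose one.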

This functionality is introduced in a separate commit on top of basic
CMA-SPDM support to split the code into digestible, reviewable chunks.

The CMA OID added here is taken from the official OID Repository
(it's not documented in the PCIe Base Spec):
https://oid-rep.orange-labs.fr/get/2.23.147

Signed-off-by: Lukas Wunner <[email protected]>
---
drivers/pci/Makefile | 4 +-
drivers/pci/cma-x509.c | 119 +++++++++++++++++++++++++++++++++++
drivers/pci/cma.asn1 | 36 +++++++++++
drivers/pci/cma.c | 3 +-
drivers/pci/pci.h | 2 +
include/linux/oid_registry.h | 3 +
include/linux/spdm.h | 6 +-
lib/spdm_requester.c | 14 ++++-
8 files changed, 183 insertions(+), 4 deletions(-)
create mode 100644 drivers/pci/cma-x509.c
create mode 100644 drivers/pci/cma.asn1

diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index e0705b82690b..a18812b8832b 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -34,7 +34,9 @@ obj-$(CONFIG_VGA_ARB) += vgaarb.o
obj-$(CONFIG_PCI_DOE) += doe.o
obj-$(CONFIG_PCI_DYNAMIC_OF_NODES) += of_property.o

-obj-$(CONFIG_PCI_CMA) += cma.o
+obj-$(CONFIG_PCI_CMA) += cma.o cma-x509.o cma.asn1.o
+$(obj)/cma-x509.o: $(obj)/cma.asn1.h
+$(obj)/cma.asn1.o: $(obj)/cma.asn1.c $(obj)/cma.asn1.h

# Endpoint library must be initialized before its users
obj-$(CONFIG_PCI_ENDPOINT) += endpoint/
diff --git a/drivers/pci/cma-x509.c b/drivers/pci/cma-x509.c
new file mode 100644
index 000000000000..614590303b38
--- /dev/null
+++ b/drivers/pci/cma-x509.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Component Measurement and Authentication (CMA-SPDM, PCIe r6.1 sec 6.31)
+ *
+ * The spdm_requester.c library calls pci_cma_validate() to check requirements
+ * for X.509 Leaf Certificates per PCIe r6.1 sec 6.31.3.
+ *
+ * It parses the Subject Alternative Name using the ASN.1 module cma.asn1,
+ * which calls pci_cma_note_oid() and pci_cma_note_san() to compare an
+ * OtherName against the expected name.
+ *
+ * The expected name is constructed beforehand by pci_cma_construct_san().
+ *
+ * Copyright (C) 2023 Intel Corporation
+ */
+
+#define dev_fmt(fmt) "CMA: " fmt
+
+#include <keys/x509-parser.h>
+#include <linux/asn1_decoder.h>
+#include <linux/oid_registry.h>
+#include <linux/pci.h>
+
+#include "cma.asn1.h"
+#include "pci.h"
+
+#define CMA_NAME_MAX sizeof("Vendor=1234:Device=1234:CC=123456:" \
+ "REV=12:SSVID=1234:SSID=1234:1234567890123456")
+
+struct pci_cma_x509_context {
+ struct pci_dev *pdev;
+ enum OID last_oid;
+ char expected_name[CMA_NAME_MAX];
+ unsigned int expected_len;
+ unsigned int found:1;
+};
+
+int pci_cma_note_oid(void *context, size_t hdrlen, unsigned char tag,
+ const void *value, size_t vlen)
+{
+ struct pci_cma_x509_context *ctx = context;
+
+ ctx->last_oid = look_up_OID(value, vlen);
+
+ return 0;
+}
+
+int pci_cma_note_san(void *context, size_t hdrlen, unsigned char tag,
+ const void *value, size_t vlen)
+{
+ struct pci_cma_x509_context *ctx = context;
+
+ /* These aren't the drOIDs we're looking for. */
+ if (ctx->last_oid != OID_CMA)
+ return 0;
+
+ if (tag != ASN1_UTF8STR ||
+ vlen != ctx->expected_len ||
+ memcmp(value, ctx->expected_name, vlen) != 0) {
+ pci_err(ctx->pdev, "Invalid X.509 Subject Alternative Name\n");
+ return -EINVAL;
+ }
+
+ ctx->found = true;
+
+ return 0;
+}
+
+static unsigned int pci_cma_construct_san(struct pci_dev *pdev, char *name)
+{
+ unsigned int len;
+ u64 serial;
+
+ len = snprintf(name, CMA_NAME_MAX,
+ "Vendor=%04hx:Device=%04hx:CC=%06x:REV=%02hhx",
+ pdev->vendor, pdev->device, pdev->class, pdev->revision);
+
+ if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL)
+ len += snprintf(name + len, CMA_NAME_MAX - len,
+ ":SSVID=%04hx:SSID=%04hx",
+ pdev->subsystem_vendor, pdev->subsystem_device);
+
+ serial = pci_get_dsn(pdev);
+ if (serial)
+ len += snprintf(name + len, CMA_NAME_MAX - len,
+ ":%016llx", serial);
+
+ return len;
+}
+
+int pci_cma_validate(struct device *dev, struct x509_certificate *leaf_cert)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct pci_cma_x509_context ctx;
+ int ret;
+
+ if (!leaf_cert->raw_san) {
+ pci_err(pdev, "Missing X.509 Subject Alternative Name\n");
+ return -EINVAL;
+ }
+
+ ctx.pdev = pdev;
+ ctx.found = false;
+ ctx.expected_len = pci_cma_construct_san(pdev, ctx.expected_name);
+
+ ret = asn1_ber_decoder(&cma_decoder, &ctx, leaf_cert->raw_san,
+ leaf_cert->raw_san_size);
+ if (ret == -EBADMSG || ret == -EMSGSIZE)
+ pci_err(pdev, "Malformed X.509 Subject Alternative Name\n");
+ if (ret < 0)
+ return ret;
+
+ if (!ctx.found) {
+ pci_err(pdev, "Missing X.509 OtherName with CMA OID\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
diff --git a/drivers/pci/cma.asn1 b/drivers/pci/cma.asn1
new file mode 100644
index 000000000000..10f90e107009
--- /dev/null
+++ b/drivers/pci/cma.asn1
@@ -0,0 +1,36 @@
+-- Component Measurement and Authentication (CMA-SPDM, PCIe r6.1 sec 6.31.3)
+-- X.509 Subject Alternative Name (RFC 5280 sec 4.2.1.6)
+--
+-- https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.6
+--
+-- The ASN.1 module in RFC 5280 appendix A.1 uses EXPLICIT TAGS whereas the one
+-- in appendix A.2 uses IMPLICIT TAGS. The kernel's simplified asn1_compiler.c
+-- always uses EXPLICIT TAGS, hence this ASN.1 module differs from RFC 5280 in
+-- that it adds IMPLICIT to definitions from appendix A.2 (such as OtherName)
+-- and omits EXPLICIT in those definitions.
+
+SubjectAltName ::= GeneralNames
+
+GeneralNames ::= SEQUENCE OF GeneralName
+
+GeneralName ::= CHOICE {
+ otherName [0] IMPLICIT OtherName,
+ rfc822Name [1] IMPLICIT IA5String,
+ dNSName [2] IMPLICIT IA5String,
+ x400Address [3] ANY,
+ directoryName [4] ANY,
+ ediPartyName [5] IMPLICIT EDIPartyName,
+ uniformResourceIdentifier [6] IMPLICIT IA5String,
+ iPAddress [7] IMPLICIT OCTET STRING,
+ registeredID [8] IMPLICIT OBJECT IDENTIFIER
+ }
+
+OtherName ::= SEQUENCE {
+ type-id OBJECT IDENTIFIER ({ pci_cma_note_oid }),
+ value [0] ANY ({ pci_cma_note_san })
+ }
+
+EDIPartyName ::= SEQUENCE {
+ nameAssigner [0] ANY OPTIONAL,
+ partyName [1] ANY
+ }
diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
index 06e5846325e3..012190c54ab6 100644
--- a/drivers/pci/cma.c
+++ b/drivers/pci/cma.c
@@ -64,7 +64,8 @@ void pci_cma_init(struct pci_dev *pdev)
return;

pdev->spdm_state = spdm_create(&pdev->dev, pci_doe_transport, doe,
- PCI_DOE_MAX_PAYLOAD, pci_cma_keyring);
+ PCI_DOE_MAX_PAYLOAD, pci_cma_keyring,
+ pci_cma_validate);
if (!pdev->spdm_state) {
return;
}
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index bd80a0369c9c..6c4755a2c91c 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -325,6 +325,8 @@ static inline void pci_doe_disconnected(struct pci_dev *pdev) { }
#ifdef CONFIG_PCI_CMA
void pci_cma_init(struct pci_dev *pdev);
void pci_cma_destroy(struct pci_dev *pdev);
+struct x509_certificate;
+int pci_cma_validate(struct device *dev, struct x509_certificate *leaf_cert);
#else
static inline void pci_cma_init(struct pci_dev *pdev) { }
static inline void pci_cma_destroy(struct pci_dev *pdev) { }
diff --git a/include/linux/oid_registry.h b/include/linux/oid_registry.h
index f86a08ba0207..cafec7111473 100644
--- a/include/linux/oid_registry.h
+++ b/include/linux/oid_registry.h
@@ -141,6 +141,9 @@ enum OID {
OID_TPMImportableKey, /* 2.23.133.10.1.4 */
OID_TPMSealedData, /* 2.23.133.10.1.5 */

+ /* PCI */
+ OID_CMA, /* 2.23.147 */
+
OID__NR
};

diff --git a/include/linux/spdm.h b/include/linux/spdm.h
index e824063793a7..69a83bc2eb41 100644
--- a/include/linux/spdm.h
+++ b/include/linux/spdm.h
@@ -17,14 +17,18 @@
struct key;
struct device;
struct spdm_state;
+struct x509_certificate;

typedef int (spdm_transport)(void *priv, struct device *dev,
const void *request, size_t request_sz,
void *response, size_t response_sz);

+typedef int (spdm_validate)(struct device *dev,
+ struct x509_certificate *leaf_cert);
+
struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
void *transport_priv, u32 transport_sz,
- struct key *keyring);
+ struct key *keyring, spdm_validate *validate);

int spdm_authenticate(struct spdm_state *spdm_state);

diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
index 407041036599..b2af2074ba6f 100644
--- a/lib/spdm_requester.c
+++ b/lib/spdm_requester.c
@@ -489,6 +489,7 @@ static int spdm_err(struct device *dev, struct spdm_error_rsp *rsp)
* responder's signatures.
* @root_keyring: Keyring against which to check the first certificate in
* responder's certificate chain.
+ * @validate: Function to validate additional leaf certificate requirements.
*/
struct spdm_state {
struct mutex lock;
@@ -520,6 +521,7 @@ struct spdm_state {
/* Certificates */
struct public_key *leaf_key;
struct key *root_keyring;
+ spdm_validate *validate;
};

static int __spdm_exchange(struct spdm_state *spdm_state,
@@ -1003,6 +1005,13 @@ static int spdm_validate_cert_chain(struct spdm_state *spdm_state, u8 slot,
}

prev = NULL;
+
+ if (spdm_state->validate) {
+ rc = spdm_state->validate(spdm_state->dev, cert);
+ if (rc)
+ goto err_free_cert;
+ }
+
spdm_state->leaf_key = cert->pub;
cert->pub = NULL;

@@ -1447,12 +1456,14 @@ EXPORT_SYMBOL_GPL(spdm_authenticated);
* @transport_priv: Transport private data
* @transport_sz: Maximum message size the transport is capable of (in bytes)
* @keyring: Trusted root certificates
+ * @validate: Function to validate additional leaf certificate requirements
+ * (optional, may be %NULL)
*
* Returns a pointer to the allocated SPDM session state or NULL on error.
*/
struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
void *transport_priv, u32 transport_sz,
- struct key *keyring)
+ struct key *keyring, spdm_validate *validate)
{
struct spdm_state *spdm_state = kzalloc(sizeof(*spdm_state), GFP_KERNEL);

@@ -1464,6 +1475,7 @@ struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
spdm_state->transport_priv = transport_priv;
spdm_state->transport_sz = transport_sz;
spdm_state->root_keyring = keyring;
+ spdm_state->validate = validate;

mutex_init(&spdm_state->lock);

--
2.40.1

2023-09-28 19:00:38

by Lukas Wunner

[permalink] [raw]
Subject: [PATCH 10/12] PCI/CMA: Reauthenticate devices on reset and resume

CMA-SPDM state is lost when a device undergoes a Conventional Reset.
(But not a Function Level Reset, PCIe r6.1 sec 6.6.2.) A D3cold to D0
transition implies a Conventional Reset (PCIe r6.1 sec 5.8).

Thus, reauthenticate devices on resume from D3cold and on recovery from
a Secondary Bus Reset or DPC-induced Hot Reset.

The requirement to reauthenticate devices on resume from system sleep
(and in the future reestablish IDE encryption) is the reason why SPDM
needs to be in-kernel: During ->resume_noirq, which is the first phase
after system sleep, the PCI core walks down the hierarchy, puts each
device in D0, restores its config space and invokes the driver's
->resume_noirq callback. The driver is afforded the right to access the
device already during this phase.

To retain this usage model in the face of authentication and encryption,
CMA-SPDM reauthentication and IDE reestablishment must happen during the
->resume_noirq phase, before the driver's first access to the device.
The driver is thus afforded seamless authenticated and encrypted access
until the last moment before suspend and from the first moment after
resume.

During the ->resume_noirq phase, device interrupts are not yet enabled.
It is thus impossible to defer CMA-SPDM reauthentication to a user space
component on an attached disk or on the network, making an in-kernel
SPDM implementation mandatory.

The same catch-22 exists on recovery from a Conventional Reset: A user
space SPDM implementation might live on a device which underwent reset,
rendering its execution impossible.

Signed-off-by: Lukas Wunner <[email protected]>
---
drivers/pci/cma.c | 10 ++++++++++
drivers/pci/pci-driver.c | 1 +
drivers/pci/pci.c | 12 ++++++++++--
drivers/pci/pci.h | 5 +++++
drivers/pci/pcie/err.c | 3 +++
include/linux/pci.h | 1 +
6 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
index 012190c54ab6..89d23fdc37ec 100644
--- a/drivers/pci/cma.c
+++ b/drivers/pci/cma.c
@@ -71,6 +71,16 @@ void pci_cma_init(struct pci_dev *pdev)
}

rc = spdm_authenticate(pdev->spdm_state);
+ if (rc != -EPROTONOSUPPORT)
+ pdev->cma_capable = true;
+}
+
+int pci_cma_reauthenticate(struct pci_dev *pdev)
+{
+ if (!pdev->cma_capable)
+ return -ENOTTY;
+
+ return spdm_authenticate(pdev->spdm_state);
}

void pci_cma_destroy(struct pci_dev *pdev)
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index a79c110c7e51..b5d47eefe8df 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -568,6 +568,7 @@ static void pci_pm_default_resume_early(struct pci_dev *pci_dev)
pci_pm_power_up_and_verify_state(pci_dev);
pci_restore_state(pci_dev);
pci_pme_restore(pci_dev);
+ pci_cma_reauthenticate(pci_dev);
}

static void pci_pm_bridge_power_up_actions(struct pci_dev *pci_dev)
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 59c01d68c6d5..0f36e6082579 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -5248,8 +5248,16 @@ static int pci_reset_bus_function(struct pci_dev *dev, bool probe)

rc = pci_dev_reset_slot_function(dev, probe);
if (rc != -ENOTTY)
- return rc;
- return pci_parent_bus_reset(dev, probe);
+ goto done;
+
+ rc = pci_parent_bus_reset(dev, probe);
+
+done:
+ /* CMA-SPDM state is lost upon a Conventional Reset */
+ if (!probe)
+ pci_cma_reauthenticate(dev);
+
+ return rc;
}

void pci_dev_lock(struct pci_dev *dev)
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 6c4755a2c91c..71092ccf4fbd 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -325,11 +325,16 @@ static inline void pci_doe_disconnected(struct pci_dev *pdev) { }
#ifdef CONFIG_PCI_CMA
void pci_cma_init(struct pci_dev *pdev);
void pci_cma_destroy(struct pci_dev *pdev);
+int pci_cma_reauthenticate(struct pci_dev *pdev);
struct x509_certificate;
int pci_cma_validate(struct device *dev, struct x509_certificate *leaf_cert);
#else
static inline void pci_cma_init(struct pci_dev *pdev) { }
static inline void pci_cma_destroy(struct pci_dev *pdev) { }
+static inline int pci_cma_reauthenticate(struct pci_dev *pdev)
+{
+ return -ENOTTY;
+}
#endif

/**
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index 59c90d04a609..4783bd907b54 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -122,6 +122,9 @@ static int report_slot_reset(struct pci_dev *dev, void *data)
pci_ers_result_t vote, *result = data;
const struct pci_error_handlers *err_handler;

+ /* CMA-SPDM state is lost upon a Conventional Reset */
+ pci_cma_reauthenticate(dev);
+
device_lock(&dev->dev);
pdrv = dev->driver;
if (!pdrv ||
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 0c0123317df6..2bc11d8b567e 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -519,6 +519,7 @@ struct pci_dev {
#endif
#ifdef CONFIG_PCI_CMA
struct spdm_state *spdm_state; /* Security Protocol and Data Model */
+ unsigned int cma_capable:1; /* Authentication supported */
#endif
u16 acs_cap; /* ACS Capability offset */
phys_addr_t rom; /* Physical address if not from BAR */
--
2.40.1

2023-09-28 19:13:56

by Lukas Wunner

[permalink] [raw]
Subject: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

At any given time, only a single entity in a physical system may have
an SPDM connection to a device. That's because the GET_VERSION request
(which begins an authentication sequence) resets "the connection and all
context associated with that connection" (SPDM 1.3.0 margin no 158).

Thus, when a device is passed through to a guest and the guest has
authenticated it, a subsequent authentication by the host would reset
the device's CMA-SPDM session behind the guest's back.

Prevent by letting the guest claim exclusive CMA ownership of the device
during passthrough. Refuse CMA reauthentication on the host as long.
After passthrough has concluded, reauthenticate the device on the host.

Store the flag indicating guest ownership in struct pci_dev's priv_flags
to avoid the concurrency issues observed by commit 44bda4b7d26e ("PCI:
Fix is_added/is_busmaster race condition").

Side note: The Data Object Exchange r1.1 ECN (published Oct 11 2022)
retrofits DOE with Connection IDs. In theory these allow simultaneous
CMA-SPDM connections by multiple entities to the same device. But the
first hardware generation capable of CMA-SPDM only supports DOE r1.0.
The specification also neglects to reserve unique Connection IDs for
hosts and guests, which further limits its usefulness.

In general, forcing the transport to compensate for SPDM's lack of a
connection identifier feels like a questionable layering violation.

Signed-off-by: Lukas Wunner <[email protected]>
Cc: Alex Williamson <[email protected]>
---
drivers/pci/cma.c | 41 ++++++++++++++++++++++++++++++++
drivers/pci/pci.h | 1 +
drivers/vfio/pci/vfio_pci_core.c | 9 +++++--
include/linux/pci.h | 8 +++++++
include/linux/spdm.h | 2 ++
lib/spdm_requester.c | 11 +++++++++
6 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
index c539ad85a28f..b3eee137ffe2 100644
--- a/drivers/pci/cma.c
+++ b/drivers/pci/cma.c
@@ -82,9 +82,50 @@ int pci_cma_reauthenticate(struct pci_dev *pdev)
if (!pdev->cma_capable)
return -ENOTTY;

+ if (test_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags))
+ return -EPERM;
+
return spdm_authenticate(pdev->spdm_state);
}

+#if IS_ENABLED(CONFIG_VFIO_PCI_CORE)
+/**
+ * pci_cma_claim_ownership() - Claim exclusive CMA-SPDM control for guest VM
+ * @pdev: PCI device
+ *
+ * Claim exclusive CMA-SPDM control for a guest virtual machine before
+ * passthrough of @pdev. The host refrains from performing CMA-SPDM
+ * authentication of the device until passthrough has concluded.
+ *
+ * Necessary because the GET_VERSION request resets the SPDM connection
+ * and DOE r1.0 allows only a single SPDM connection for the entire system.
+ * So the host could reset the guest's SPDM connection behind the guest's back.
+ */
+void pci_cma_claim_ownership(struct pci_dev *pdev)
+{
+ set_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags);
+
+ if (pdev->cma_capable)
+ spdm_await(pdev->spdm_state);
+}
+EXPORT_SYMBOL(pci_cma_claim_ownership);
+
+/**
+ * pci_cma_return_ownership() - Relinquish CMA-SPDM control to the host
+ * @pdev: PCI device
+ *
+ * Relinquish CMA-SPDM control to the host after passthrough of @pdev to a
+ * guest virtual machine has concluded.
+ */
+void pci_cma_return_ownership(struct pci_dev *pdev)
+{
+ clear_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags);
+
+ pci_cma_reauthenticate(pdev);
+}
+EXPORT_SYMBOL(pci_cma_return_ownership);
+#endif
+
void pci_cma_destroy(struct pci_dev *pdev)
{
if (pdev->spdm_state)
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index d80cc06be0cc..05ae6359b152 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -388,6 +388,7 @@ static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
#define PCI_DEV_ADDED 0
#define PCI_DPC_RECOVERED 1
#define PCI_DPC_RECOVERING 2
+#define PCI_CMA_OWNED_BY_GUEST 3

static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
{
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 1929103ee59a..6f300664a342 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -487,10 +487,12 @@ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
if (ret)
goto out_power;

+ pci_cma_claim_ownership(pdev);
+
/* If reset fails because of the device lock, fail this path entirely */
ret = pci_try_reset_function(pdev);
if (ret == -EAGAIN)
- goto out_disable_device;
+ goto out_cma_return;

vdev->reset_works = !ret;
pci_save_state(pdev);
@@ -549,7 +551,8 @@ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
out_free_state:
kfree(vdev->pci_saved_state);
vdev->pci_saved_state = NULL;
-out_disable_device:
+out_cma_return:
+ pci_cma_return_ownership(pdev);
pci_disable_device(pdev);
out_power:
if (!disable_idle_d3)
@@ -678,6 +681,8 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)

vfio_pci_dev_set_try_reset(vdev->vdev.dev_set);

+ pci_cma_return_ownership(pdev);
+
/* Put the pm-runtime usage counter acquired during enable */
if (!disable_idle_d3)
pm_runtime_put(&pdev->dev);
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 2c5fde81bb85..c14ea0e74fc4 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -2386,6 +2386,14 @@ static inline resource_size_t pci_iov_resource_size(struct pci_dev *dev, int res
static inline void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe) { }
#endif

+#ifdef CONFIG_PCI_CMA
+void pci_cma_claim_ownership(struct pci_dev *pdev);
+void pci_cma_return_ownership(struct pci_dev *pdev);
+#else
+static inline void pci_cma_claim_ownership(struct pci_dev *pdev) { }
+static inline void pci_cma_return_ownership(struct pci_dev *pdev) { }
+#endif
+
#if defined(CONFIG_HOTPLUG_PCI) || defined(CONFIG_HOTPLUG_PCI_MODULE)
void pci_hp_create_module_link(struct pci_slot *pci_slot);
void pci_hp_remove_module_link(struct pci_slot *pci_slot);
diff --git a/include/linux/spdm.h b/include/linux/spdm.h
index 69a83bc2eb41..d796127fbe9a 100644
--- a/include/linux/spdm.h
+++ b/include/linux/spdm.h
@@ -34,6 +34,8 @@ int spdm_authenticate(struct spdm_state *spdm_state);

bool spdm_authenticated(struct spdm_state *spdm_state);

+void spdm_await(struct spdm_state *spdm_state);
+
void spdm_destroy(struct spdm_state *spdm_state);

#endif
diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
index b2af2074ba6f..99424d6aebf5 100644
--- a/lib/spdm_requester.c
+++ b/lib/spdm_requester.c
@@ -1483,6 +1483,17 @@ struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
}
EXPORT_SYMBOL_GPL(spdm_create);

+/**
+ * spdm_await() - Wait for ongoing spdm_authenticate() to finish
+ *
+ * @spdm_state: SPDM session state
+ */
+void spdm_await(struct spdm_state *spdm_state)
+{
+ mutex_lock(&spdm_state->lock);
+ mutex_unlock(&spdm_state->lock);
+}
+
/**
* spdm_destroy() - Destroy SPDM session
*
--
2.40.1

2023-10-02 16:59:58

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 03/12] X.509: Move certificate length retrieval into new helper

On Thu, 28 Sep 2023 19:32:32 +0200
Lukas Wunner <[email protected]> wrote:

> The upcoming in-kernel SPDM library (Security Protocol and Data Model,
> https://www.dmtf.org/dsp/DSP0274) needs to retrieve the length from
> ASN.1 DER-encoded X.509 certificates.
>
> Such code already exists in x509_load_certificate_list(), so move it
> into a new helper for reuse by SPDM.
>
> No functional change intended.
>
> Signed-off-by: Lukas Wunner <[email protected]>

Good find :) I vaguely remember carrying a hack for this so
good to do something more general + save on the duplication.

Reviewed-by: Jonathan Cameron <[email protected]>


> ---
> crypto/asymmetric_keys/x509_loader.c | 38 +++++++++++++++++++---------
> include/keys/asymmetric-type.h | 2 ++
> 2 files changed, 28 insertions(+), 12 deletions(-)
>
> diff --git a/crypto/asymmetric_keys/x509_loader.c b/crypto/asymmetric_keys/x509_loader.c
> index a41741326998..121460a0de46 100644
> --- a/crypto/asymmetric_keys/x509_loader.c
> +++ b/crypto/asymmetric_keys/x509_loader.c
> @@ -4,28 +4,42 @@
> #include <linux/key.h>
> #include <keys/asymmetric-type.h>
>
> +int x509_get_certificate_length(const u8 *p, unsigned long buflen)
> +{
> + int plen;
> +
> + /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
> + * than 256 bytes in size.
> + */
> + if (buflen < 4)
> + return -EINVAL;
> +
> + if (p[0] != 0x30 &&
> + p[1] != 0x82)
> + return -EINVAL;
> +
> + plen = (p[2] << 8) | p[3];
> + plen += 4;
> + if (plen > buflen)
> + return -EINVAL;
> +
> + return plen;
> +}
> +EXPORT_SYMBOL_GPL(x509_get_certificate_length);
> +
> int x509_load_certificate_list(const u8 cert_list[],
> const unsigned long list_size,
> const struct key *keyring)
> {
> key_ref_t key;
> const u8 *p, *end;
> - size_t plen;
> + int plen;
>
> p = cert_list;
> end = p + list_size;
> while (p < end) {
> - /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
> - * than 256 bytes in size.
> - */
> - if (end - p < 4)
> - goto dodgy_cert;
> - if (p[0] != 0x30 &&
> - p[1] != 0x82)
> - goto dodgy_cert;
> - plen = (p[2] << 8) | p[3];
> - plen += 4;
> - if (plen > end - p)
> + plen = x509_get_certificate_length(p, end - p);
> + if (plen < 0)
> goto dodgy_cert;
>
> key = key_create_or_update(make_key_ref(keyring, 1),
> diff --git a/include/keys/asymmetric-type.h b/include/keys/asymmetric-type.h
> index 69a13e1e5b2e..6705cfde25b9 100644
> --- a/include/keys/asymmetric-type.h
> +++ b/include/keys/asymmetric-type.h
> @@ -84,6 +84,8 @@ extern struct key *find_asymmetric_key(struct key *keyring,
> const struct asymmetric_key_id *id_2,
> bool partial);
>
> +int x509_get_certificate_length(const u8 *p, unsigned long buflen);
> +
> int x509_load_certificate_list(const u8 cert_list[], const unsigned long list_size,
> const struct key *keyring);
>
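
For a concrete feel of the arithmetic in the helper quoted above, here is a
tiny standalone sketch (illustrative only, not part of the posted patch): a
certificate whose DER encoding starts with the bytes 30 82 04 d2 is a
SEQUENCE with 0x04d2 = 1234 content bytes, so the helper would return
1234 + 4 = 1238 for the whole certificate.

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* SEQUENCE tag, long-form length in the following two bytes */
		const uint8_t hdr[] = { 0x30, 0x82, 0x04, 0xd2 };
		int plen = (hdr[2] << 8) | hdr[3];	/* 0x04d2 = 1234 */

		printf("total certificate length: %d\n", plen + 4);	/* 1238 */
		return 0;
	}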

2023-10-02 17:12:47

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 06/12] crypto: ecdsa - Support P1363 signature encoding

On Thu, 28 Sep 2023 19:32:36 +0200
Lukas Wunner <[email protected]> wrote:

> Alternatively to the X9.62 encoding of ecdsa signatures, which uses
> ASN.1 and is already supported by the kernel, there's another common
> encoding called P1363. It stores r and s as the concatenation of two
> big endian, unsigned integers. The name originates from IEEE P1363.
>
> The Security Protocol and Data Model (SPDM) specification prescribes
> that ecdsa signatures are encoded according to P1363:
>
> "For ECDSA signatures, excluding SM2, in SPDM, the signature shall be
> the concatenation of r and s. The size of r shall be the size of
> the selected curve. Likewise, the size of s shall be the size of
> the selected curve. See BaseAsymAlgo in NEGOTIATE_ALGORITHMS for
> the size of r and s. The byte order for r and s shall be in big
> endian order. When placing ECDSA signatures into an SPDM signature
> field, r shall come first followed by s."
>
> (SPDM 1.2.1 margin no 44,
> https://www.dmtf.org/sites/default/files/standards/documents/DSP0274_1.2.1.pdf)
>
> A subsequent commit introduces an SPDM library to enable PCI device
> authentication, so add support for P1363 ecdsa signature verification.

Ah good. The spec got updated. I remember playing guess with the format
against libspdm which wasn't fun :)

One trivial formatting note inline.

>
> Signed-off-by: Lukas Wunner <[email protected]>

Reviewed-by: Jonathan Cameron <[email protected]>


> ---
> crypto/asymmetric_keys/public_key.c | 8 ++++++--
> crypto/ecdsa.c | 16 +++++++++++++---
> crypto/testmgr.h | 15 +++++++++++++++
> 3 files changed, 34 insertions(+), 5 deletions(-)
>
> diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
> index 7f96e8e501db..84c4ed02a270 100644
> --- a/crypto/asymmetric_keys/public_key.c
> +++ b/crypto/asymmetric_keys/public_key.c
> @@ -105,7 +105,8 @@ software_key_determine_akcipher(const struct public_key *pkey,
> return -EINVAL;
> *sig = false;
> } else if (strncmp(pkey->pkey_algo, "ecdsa", 5) == 0) {
> - if (strcmp(encoding, "x962") != 0)
> + if (strcmp(encoding, "x962") != 0 &&
> + strcmp(encoding, "p1363") != 0)
> return -EINVAL;
> /*
> * ECDSA signatures are taken over a raw hash, so they don't
> @@ -246,7 +247,10 @@ static int software_key_query(const struct kernel_pkey_params *params,
> * which is actually 2 'key_size'-bit integers encoded in
> * ASN.1. Account for the ASN.1 encoding overhead here.
> */
> - info->max_sig_size = 2 * (len + 3) + 2;
> + if (strcmp(params->encoding, "x962") == 0)
> + info->max_sig_size = 2 * (len + 3) + 2;
> + else if (strcmp(params->encoding, "p1363") == 0)
> + info->max_sig_size = 2 * len;
> } else {
> info->max_data_size = len;
> info->max_sig_size = len;
> diff --git a/crypto/ecdsa.c b/crypto/ecdsa.c
> index fbd76498aba8..cc3082c6f67d 100644
> --- a/crypto/ecdsa.c
> +++ b/crypto/ecdsa.c
> @@ -159,10 +159,20 @@ static int ecdsa_verify(struct akcipher_request *req)
> sg_nents_for_len(req->src, req->src_len + req->dst_len),
> buffer, req->src_len + req->dst_len, 0);
>
> - ret = asn1_ber_decoder(&ecdsasignature_decoder, &sig_ctx,
> - buffer, req->src_len);
> - if (ret < 0)
> + if (strcmp(req->enc, "x962") == 0) {
> + ret = asn1_ber_decoder(&ecdsasignature_decoder, &sig_ctx,
> + buffer, req->src_len);
> + if (ret < 0)
> + goto error;
> + } else if (strcmp(req->enc, "p1363") == 0 &&
> + req->src_len == 2 * keylen) {
> + ecc_swap_digits(buffer, sig_ctx.r, ctx->curve->g.ndigits);
> + ecc_swap_digits(buffer + keylen,
> + sig_ctx.s, ctx->curve->g.ndigits);

Indent looks a little odd.


> + } else {
> + ret = -EINVAL;
> goto error;
> + }
>
> /* if the hash is shorter then we will add leading zeros to fit to ndigits */
> diff = keylen - req->dst_len;
> diff --git a/crypto/testmgr.h b/crypto/testmgr.h
> index ad57e7af2e14..f12f70818147 100644
> --- a/crypto/testmgr.h
> +++ b/crypto/testmgr.h
> @@ -674,6 +674,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
> "\x68\x01\x9d\xba\xce\x83\x08\xef\x95\x52\x7b\xa0\x0f\xe4\x18\x86"
> "\x80\x6f\xa5\x79\x77\xda\xd0",
> .c_size = 55,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -698,6 +699,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
> "\x4f\x53\x75\xc8\x02\x48\xeb\xc3\x92\x0f\x1e\x72\xee\xc4\xa3\xe3"
> "\x5c\x99\xdb\x92\x5b\x36",
> .c_size = 54,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -722,6 +724,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
> "\x69\x43\xfd\x48\x19\x86\xcf\x32\xdd\x41\x74\x6a\x51\xc7\xd9\x7d"
> "\x3a\x97\xd9\xcd\x1a\x6a\x49",
> .c_size = 55,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -747,6 +750,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
> "\xbc\x5a\x1f\x82\x96\x61\xd7\xd1\x01\x77\x44\x5d\x53\xa4\x7c\x93"
> "\x12\x3b\x3b\x28\xfb\x6d\xe1",
> .c_size = 55,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -773,6 +777,7 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
> "\xb4\x22\x9a\x98\x73\x3c\x83\xa9\x14\x2a\x5e\xf5\xe5\xfb\x72\x28"
> "\x6a\xdf\x97\xfd\x82\x76\x24",
> .c_size = 55,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> },
> @@ -803,6 +808,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
> "\x8a\xfa\x54\x93\x29\xa7\x70\x86\xf1\x03\x03\xf3\x3b\xe2\x73\xf7"
> "\xfb\x9d\x8b\xde\xd4\x8d\x6f\xad",
> .c_size = 72,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -829,6 +835,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
> "\x4a\x77\x22\xec\xc8\x66\xbf\x50\x05\x58\x39\x0e\x26\x92\xce\xd5"
> "\x2e\x8b\xde\x5a\x04\x0e",
> .c_size = 70,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -855,6 +862,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
> "\xa9\x81\xac\x4a\x50\xd0\x91\x0a\x6e\x1b\xc4\xaf\xe1\x83\xc3\x4f"
> "\x2a\x65\x35\x23\xe3\x1d\xfa",
> .c_size = 71,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -882,6 +890,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
> "\x19\xfb\x5f\x92\xf4\xc9\x23\x37\x69\xf4\x3b\x4f\x47\xcf\x9b\x16"
> "\xc0\x60\x11\x92\xdc\x17\x89\x12",
> .c_size = 72,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -910,6 +919,7 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
> "\x00\xdd\xab\xd4\xc0\x2b\xe6\x5c\xad\xc3\x78\x1c\xc2\xc1\x19\x76"
> "\x31\x79\x4a\xe9\x81\x6a\xee",
> .c_size = 71,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> },
> @@ -944,6 +954,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
> "\x74\xa0\x0f\xbf\xaf\xc3\x36\x76\x4a\xa1\x59\xf1\x1c\xa4\x58\x26"
> "\x79\x12\x2a\xb7\xc5\x15\x92\xc5",
> .c_size = 104,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -974,6 +985,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
> "\x4d\xd0\xc6\x6e\xb0\xe9\xfc\x14\x9f\x19\xd0\x42\x8b\x93\xc2\x11"
> "\x88\x2b\x82\x26\x5e\x1c\xda\xfb",
> .c_size = 104,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -1004,6 +1016,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
> "\xc0\x75\x3e\x23\x5e\x36\x4f\x8d\xde\x1e\x93\x8d\x95\xbb\x10\x0e"
> "\xf4\x1f\x39\xca\x4d\x43",
> .c_size = 102,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -1035,6 +1048,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
> "\x44\x92\x8c\x86\x99\x65\xb3\x97\x96\x17\x04\xc9\x05\x77\xf1\x8e"
> "\xab\x8d\x4e\xde\xe6\x6d\x9b\x66",
> .c_size = 104,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> }, {
> @@ -1067,6 +1081,7 @@ static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
> "\x5f\x8d\x7a\xf9\xfb\x34\xe4\x8b\x80\xa5\xb6\xda\x2c\x4e\x45\xcf"
> "\x3c\x93\xff\x50\x5d",
> .c_size = 101,
> + .enc = "x962",
> .public_key_vec = true,
> .siggen_sigver_test = true,
> },
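
As a rough size comparison of the two encodings, following the max_sig_size
arithmetic quoted above and assuming NIST P-256 (coordinate size len = 32
bytes): a worst-case X9.62/ASN.1 signature is 2 * (32 + 3) + 2 = 72 bytes
(matching the largest c_size in the P-256 test vectors), while a P1363
signature is always exactly 2 * 32 = 64 bytes of r || s. A minimal sketch,
illustrative only and not part of the series:

	#include <stdio.h>

	int main(void)
	{
		unsigned int len = 32;			/* P-256 coordinate size */
		unsigned int x962  = 2 * (len + 3) + 2;	/* worst-case ASN.1 DER  */
		unsigned int p1363 = 2 * len;		/* fixed-size r || s     */

		printf("x962 max %u, p1363 %u\n", x962, p1363);	/* 72 vs 64 */
		return 0;
	}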

2023-10-02 17:26:59

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 05/12] crypto: akcipher - Support more than one signature encoding

On Thu, 28 Sep 2023 19:32:35 +0200
Lukas Wunner <[email protected]> wrote:

> Currently only a single default signature encoding is supported per
> akcipher.
>
> A subsequent commit will allow a second encoding for ecdsa, namely P1363
> alternatively to X9.62.
>
> To accommodate for that, amend struct akcipher_request and struct
> crypto_akcipher_sync_data to store the desired signature encoding for
> verify and sign ops.
>
> Amend akcipher_request_set_crypt(), crypto_sig_verify() and
> crypto_sig_sign() with an additional parameter which specifies the
> desired signature encoding. Adjust all callers.
>
> Signed-off-by: Lukas Wunner <[email protected]>

Reviewed-by: Jonathan Cameron <[email protected]>

> ---
> crypto/akcipher.c | 2 +-
> crypto/asymmetric_keys/public_key.c | 4 ++--
> crypto/internal.h | 1 +
> crypto/rsa-pkcs1pad.c | 11 +++++++----
> crypto/sig.c | 6 ++++--
> crypto/testmgr.c | 8 +++++---
> crypto/testmgr.h | 1 +
> include/crypto/akcipher.h | 10 +++++++++-
> include/crypto/sig.h | 6 ++++--
> 9 files changed, 34 insertions(+), 15 deletions(-)
>
> diff --git a/crypto/akcipher.c b/crypto/akcipher.c
> index 52813f0b19e4..88501c0886d2 100644
> --- a/crypto/akcipher.c
> +++ b/crypto/akcipher.c
> @@ -221,7 +221,7 @@ int crypto_akcipher_sync_prep(struct crypto_akcipher_sync_data *data)
> sg = &data->sg;
> sg_init_one(sg, buf, mlen);
> akcipher_request_set_crypt(req, sg, data->dst ? sg : NULL,
> - data->slen, data->dlen);
> + data->slen, data->dlen, data->enc);
>
> crypto_init_wait(&data->cwait);
> akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
> diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
> index abeecb8329b3..7f96e8e501db 100644
> --- a/crypto/asymmetric_keys/public_key.c
> +++ b/crypto/asymmetric_keys/public_key.c
> @@ -354,7 +354,7 @@ static int software_key_eds_op(struct kernel_pkey_params *params,
> if (!issig)
> break;
> ret = crypto_sig_sign(sig, in, params->in_len,
> - out, params->out_len);
> + out, params->out_len, params->encoding);
> break;
> default:
> BUG();
> @@ -438,7 +438,7 @@ int public_key_verify_signature(const struct public_key *pkey,
> goto error_free_key;
>
> ret = crypto_sig_verify(tfm, sig->s, sig->s_size,
> - sig->digest, sig->digest_size);
> + sig->digest, sig->digest_size, sig->encoding);
>
> error_free_key:
> kfree_sensitive(key);
> diff --git a/crypto/internal.h b/crypto/internal.h
> index 63e59240d5fb..268315b13ccd 100644
> --- a/crypto/internal.h
> +++ b/crypto/internal.h
> @@ -41,6 +41,7 @@ struct crypto_akcipher_sync_data {
> void *dst;
> unsigned int slen;
> unsigned int dlen;
> + const char *enc;
>
> struct akcipher_request *req;
> struct crypto_wait cwait;
> diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
> index d2e5e104f8cf..5f9313a3b01e 100644
> --- a/crypto/rsa-pkcs1pad.c
> +++ b/crypto/rsa-pkcs1pad.c
> @@ -262,7 +262,8 @@ static int pkcs1pad_encrypt(struct akcipher_request *req)
>
> /* Reuse output buffer */
> akcipher_request_set_crypt(&req_ctx->child_req, req_ctx->in_sg,
> - req->dst, ctx->key_size - 1, req->dst_len);
> + req->dst, ctx->key_size - 1, req->dst_len,
> + NULL);
>
> err = crypto_akcipher_encrypt(&req_ctx->child_req);
> if (err != -EINPROGRESS && err != -EBUSY)
> @@ -362,7 +363,7 @@ static int pkcs1pad_decrypt(struct akcipher_request *req)
> /* Reuse input buffer, output to a new buffer */
> akcipher_request_set_crypt(&req_ctx->child_req, req->src,
> req_ctx->out_sg, req->src_len,
> - ctx->key_size);
> + ctx->key_size, NULL);
>
> err = crypto_akcipher_decrypt(&req_ctx->child_req);
> if (err != -EINPROGRESS && err != -EBUSY)
> @@ -419,7 +420,8 @@ static int pkcs1pad_sign(struct akcipher_request *req)
>
> /* Reuse output buffer */
> akcipher_request_set_crypt(&req_ctx->child_req, req_ctx->in_sg,
> - req->dst, ctx->key_size - 1, req->dst_len);
> + req->dst, ctx->key_size - 1, req->dst_len,
> + req->enc);
>
> err = crypto_akcipher_decrypt(&req_ctx->child_req);
> if (err != -EINPROGRESS && err != -EBUSY)
> @@ -551,7 +553,8 @@ static int pkcs1pad_verify(struct akcipher_request *req)
>
> /* Reuse input buffer, output to a new buffer */
> akcipher_request_set_crypt(&req_ctx->child_req, req->src,
> - req_ctx->out_sg, sig_size, ctx->key_size);
> + req_ctx->out_sg, sig_size, ctx->key_size,
> + req->enc);
>
> err = crypto_akcipher_encrypt(&req_ctx->child_req);
> if (err != -EINPROGRESS && err != -EBUSY)
> diff --git a/crypto/sig.c b/crypto/sig.c
> index 224c47019297..4fc1a8f865e4 100644
> --- a/crypto/sig.c
> +++ b/crypto/sig.c
> @@ -89,7 +89,7 @@ EXPORT_SYMBOL_GPL(crypto_sig_maxsize);
>
> int crypto_sig_sign(struct crypto_sig *tfm,
> const void *src, unsigned int slen,
> - void *dst, unsigned int dlen)
> + void *dst, unsigned int dlen, const char *enc)
> {
> struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
> struct crypto_akcipher_sync_data data = {
> @@ -98,6 +98,7 @@ int crypto_sig_sign(struct crypto_sig *tfm,
> .dst = dst,
> .slen = slen,
> .dlen = dlen,
> + .enc = enc,
> };
>
> return crypto_akcipher_sync_prep(&data) ?:
> @@ -108,7 +109,7 @@ EXPORT_SYMBOL_GPL(crypto_sig_sign);
>
> int crypto_sig_verify(struct crypto_sig *tfm,
> const void *src, unsigned int slen,
> - const void *digest, unsigned int dlen)
> + const void *digest, unsigned int dlen, const char *enc)
> {
> struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
> struct crypto_akcipher_sync_data data = {
> @@ -116,6 +117,7 @@ int crypto_sig_verify(struct crypto_sig *tfm,
> .src = src,
> .slen = slen,
> .dlen = dlen,
> + .enc = enc,
> };
> int err;
>
> diff --git a/crypto/testmgr.c b/crypto/testmgr.c
> index 216878c8bc3d..d5dd715673dd 100644
> --- a/crypto/testmgr.c
> +++ b/crypto/testmgr.c
> @@ -4154,11 +4154,12 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
> goto free_all;
> memcpy(xbuf[1], c, c_size);
> sg_set_buf(&src_tab[2], xbuf[1], c_size);
> - akcipher_request_set_crypt(req, src_tab, NULL, m_size, c_size);
> + akcipher_request_set_crypt(req, src_tab, NULL, m_size, c_size,
> + vecs->enc);
> } else {
> sg_init_one(&dst, outbuf_enc, out_len_max);
> akcipher_request_set_crypt(req, src_tab, &dst, m_size,
> - out_len_max);
> + out_len_max, NULL);
> }
> akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
> crypto_req_done, &wait);
> @@ -4217,7 +4218,8 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
> sg_init_one(&src, xbuf[0], c_size);
> sg_init_one(&dst, outbuf_dec, out_len_max);
> crypto_init_wait(&wait);
> - akcipher_request_set_crypt(req, &src, &dst, c_size, out_len_max);
> + akcipher_request_set_crypt(req, &src, &dst, c_size, out_len_max,
> + vecs->enc);
>
> err = crypto_wait_req(vecs->siggen_sigver_test ?
> /* Run asymmetric signature generation */
> diff --git a/crypto/testmgr.h b/crypto/testmgr.h
> index 5ca7a412508f..ad57e7af2e14 100644
> --- a/crypto/testmgr.h
> +++ b/crypto/testmgr.h
> @@ -153,6 +153,7 @@ struct akcipher_testvec {
> const unsigned char *params;
> const unsigned char *m;
> const unsigned char *c;
> + const char *enc;
> unsigned int key_len;
> unsigned int param_len;
> unsigned int m_size;
> diff --git a/include/crypto/akcipher.h b/include/crypto/akcipher.h
> index 670508f1dca1..00bbec69af3b 100644
> --- a/include/crypto/akcipher.h
> +++ b/include/crypto/akcipher.h
> @@ -30,6 +30,8 @@
> * In case of error where the dst sgl size was insufficient,
> * it will be updated to the size required for the operation.
> * For verify op this is size of digest part in @src.
> + * @enc: For verify op it's the encoding of the signature part of @src.
> + * For sign op it's the encoding of the signature in @dst.
> * @__ctx: Start of private context data
> */
> struct akcipher_request {
> @@ -38,6 +40,7 @@ struct akcipher_request {
> struct scatterlist *dst;
> unsigned int src_len;
> unsigned int dst_len;
> + const char *enc;
> void *__ctx[] CRYPTO_MINALIGN_ATTR;
> };
>
> @@ -272,17 +275,22 @@ static inline void akcipher_request_set_callback(struct akcipher_request *req,
> * @src_len: size of the src input scatter list to be processed
> * @dst_len: size of the dst output scatter list or size of signature
> * portion in @src for verify op
> + * @enc: encoding of signature portion in @src for verify op,
> + * encoding of signature in @dst for sign op,
> + * NULL for encrypt and decrypt op
> */
> static inline void akcipher_request_set_crypt(struct akcipher_request *req,
> struct scatterlist *src,
> struct scatterlist *dst,
> unsigned int src_len,
> - unsigned int dst_len)
> + unsigned int dst_len,
> + const char *enc)
> {
> req->src = src;
> req->dst = dst;
> req->src_len = src_len;
> req->dst_len = dst_len;
> + req->enc = enc;
> }
>
> /**
> diff --git a/include/crypto/sig.h b/include/crypto/sig.h
> index 641b4714c448..1df18005c854 100644
> --- a/include/crypto/sig.h
> +++ b/include/crypto/sig.h
> @@ -81,12 +81,13 @@ int crypto_sig_maxsize(struct crypto_sig *tfm);
> * @slen: source length
> * @dst: destinatino obuffer
> * @dlen: destination length
> + * @enc: signature encoding
> *
> * Return: zero on success; error code in case of error
> */
> int crypto_sig_sign(struct crypto_sig *tfm,
> const void *src, unsigned int slen,
> - void *dst, unsigned int dlen);
> + void *dst, unsigned int dlen, const char *enc);
>
> /**
> * crypto_sig_verify() - Invoke signature verification
> @@ -99,12 +100,13 @@ int crypto_sig_sign(struct crypto_sig *tfm,
> * @slen: source length
> * @digest: digest
> * @dlen: digest length
> + * @enc: signature encoding
> *
> * Return: zero on verification success; error code in case of error.
> */
> int crypto_sig_verify(struct crypto_sig *tfm,
> const void *src, unsigned int slen,
> - const void *digest, unsigned int dlen);
> + const void *digest, unsigned int dlen, const char *enc);
>
> /**
> * crypto_sig_set_pubkey() - Invoke set public key operation
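
To illustrate the new parameter from a caller's perspective, a hypothetical
wrapper based on the amended crypto_sig_verify() prototype above (the "p1363"
encoding only exists after the later ecdsa patch; this is a sketch, not code
from the series):

	#include <crypto/sig.h>

	/* Verify a P1363-encoded ECDSA signature over a precomputed digest. */
	static int verify_sig_p1363(struct crypto_sig *tfm,
				    const void *sig, unsigned int sig_len,
				    const void *digest, unsigned int digest_len)
	{
		return crypto_sig_verify(tfm, sig, sig_len,
					 digest, digest_len, "p1363");
	}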

2023-10-03 07:57:35

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH 01/12] X.509: Make certificate parser public

On Thu, 28 Sep 2023, Lukas Wunner wrote:

> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires validating the Subject Alternative Name
> in X.509 certificates.
>
> High-level functions for X.509 parsing such as key_create_or_update()
> throw away the internal, low-level struct x509_certificate after
> extracting the struct public_key and public_key_signature from it.
> The Subject Alternative Name is thus inaccessible when using those
> functions.
>
> Afford CMA-SPDM access to the Subject Alternative Name by making struct
> x509_certificate public, together with the functions for parsing an
> X.509 certificate into such a struct and freeing such a struct.
>
> The private header file x509_parser.h previously included <linux/time.h>
> for the definition of time64_t. That definition was since moved to
> <linux/time64.h> by commit 361a3bf00582 ("time64: Add time64.h header
> and define struct timespec64"), so adjust the #include directive as part
> of the move to the new public header file <keys/x509-parser.h>.
>
> No functional change intended.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> crypto/asymmetric_keys/x509_parser.h | 37 +----------------------
> include/keys/x509-parser.h | 44 ++++++++++++++++++++++++++++
> 2 files changed, 45 insertions(+), 36 deletions(-)
> create mode 100644 include/keys/x509-parser.h
>
> diff --git a/crypto/asymmetric_keys/x509_parser.h b/crypto/asymmetric_keys/x509_parser.h
> index a299c9c56f40..a7ef43c39002 100644
> --- a/crypto/asymmetric_keys/x509_parser.h
> +++ b/crypto/asymmetric_keys/x509_parser.h
> @@ -5,40 +5,7 @@
> * Written by David Howells ([email protected])
> */
>
> -#include <linux/time.h>
> -#include <crypto/public_key.h>
> -#include <keys/asymmetric-type.h>
> -
> -struct x509_certificate {
> - struct x509_certificate *next;
> - struct x509_certificate *signer; /* Certificate that signed this one */
> - struct public_key *pub; /* Public key details */
> - struct public_key_signature *sig; /* Signature parameters */
> - char *issuer; /* Name of certificate issuer */
> - char *subject; /* Name of certificate subject */
> - struct asymmetric_key_id *id; /* Issuer + Serial number */
> - struct asymmetric_key_id *skid; /* Subject + subjectKeyId (optional) */
> - time64_t valid_from;
> - time64_t valid_to;
> - const void *tbs; /* Signed data */
> - unsigned tbs_size; /* Size of signed data */
> - unsigned raw_sig_size; /* Size of signature */
> - const void *raw_sig; /* Signature data */
> - const void *raw_serial; /* Raw serial number in ASN.1 */
> - unsigned raw_serial_size;
> - unsigned raw_issuer_size;
> - const void *raw_issuer; /* Raw issuer name in ASN.1 */
> - const void *raw_subject; /* Raw subject name in ASN.1 */
> - unsigned raw_subject_size;
> - unsigned raw_skid_size;
> - const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
> - unsigned index;
> - bool seen; /* Infinite recursion prevention */
> - bool verified;
> - bool self_signed; /* T if self-signed (check unsupported_sig too) */
> - bool unsupported_sig; /* T if signature uses unsupported crypto */
> - bool blacklisted;
> -};
> +#include <keys/x509-parser.h>
>
> /*
> * selftest.c
> @@ -52,8 +19,6 @@ static inline int fips_signature_selftest(void) { return 0; }
> /*
> * x509_cert_parser.c
> */
> -extern void x509_free_certificate(struct x509_certificate *cert);
> -extern struct x509_certificate *x509_cert_parse(const void *data, size_t datalen);
> extern int x509_decode_time(time64_t *_t, size_t hdrlen,
> unsigned char tag,
> const unsigned char *value, size_t vlen);
> diff --git a/include/keys/x509-parser.h b/include/keys/x509-parser.h
> new file mode 100644
> index 000000000000..7c2ebc84791f
> --- /dev/null
> +++ b/include/keys/x509-parser.h
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/* X.509 certificate parser
> + *
> + * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
> + * Written by David Howells ([email protected])
> + */

Please add the include guard #ifndef + #define.

Other than that, this looks okay,

Reviewed-by: Ilpo Järvinen <[email protected]>

--
i.


> +
> +#include <crypto/public_key.h>
> +#include <keys/asymmetric-type.h>
> +#include <linux/time64.h>
> +
> +struct x509_certificate {
> + struct x509_certificate *next;
> + struct x509_certificate *signer; /* Certificate that signed this one */
> + struct public_key *pub; /* Public key details */
> + struct public_key_signature *sig; /* Signature parameters */
> + char *issuer; /* Name of certificate issuer */
> + char *subject; /* Name of certificate subject */
> + struct asymmetric_key_id *id; /* Issuer + Serial number */
> + struct asymmetric_key_id *skid; /* Subject + subjectKeyId (optional) */
> + time64_t valid_from;
> + time64_t valid_to;
> + const void *tbs; /* Signed data */
> + unsigned tbs_size; /* Size of signed data */
> + unsigned raw_sig_size; /* Size of signature */
> + const void *raw_sig; /* Signature data */
> + const void *raw_serial; /* Raw serial number in ASN.1 */
> + unsigned raw_serial_size;
> + unsigned raw_issuer_size;
> + const void *raw_issuer; /* Raw issuer name in ASN.1 */
> + const void *raw_subject; /* Raw subject name in ASN.1 */
> + unsigned raw_subject_size;
> + unsigned raw_skid_size;
> + const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
> + unsigned index;
> + bool seen; /* Infinite recursion prevention */
> + bool verified;
> + bool self_signed; /* T if self-signed (check unsupported_sig too) */
> + bool unsupported_sig; /* T if signature uses unsupported crypto */
> + bool blacklisted;
> +};
> +
> +struct x509_certificate *x509_cert_parse(const void *data, size_t datalen);
> +void x509_free_certificate(struct x509_certificate *cert);

2023-10-03 08:31:58

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH 03/12] X.509: Move certificate length retrieval into new helper

On Thu, 28 Sep 2023, Lukas Wunner wrote:

> The upcoming in-kernel SPDM library (Security Protocol and Data Model,
> https://www.dmtf.org/dsp/DSP0274) needs to retrieve the length from
> ASN.1 DER-encoded X.509 certificates.
>
> Such code already exists in x509_load_certificate_list(), so move it
> into a new helper for reuse by SPDM.
>
> No functional change intended.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> crypto/asymmetric_keys/x509_loader.c | 38 +++++++++++++++++++---------
> include/keys/asymmetric-type.h | 2 ++
> 2 files changed, 28 insertions(+), 12 deletions(-)
>
> diff --git a/crypto/asymmetric_keys/x509_loader.c b/crypto/asymmetric_keys/x509_loader.c
> index a41741326998..121460a0de46 100644
> --- a/crypto/asymmetric_keys/x509_loader.c
> +++ b/crypto/asymmetric_keys/x509_loader.c
> @@ -4,28 +4,42 @@
> #include <linux/key.h>
> #include <keys/asymmetric-type.h>
>
> +int x509_get_certificate_length(const u8 *p, unsigned long buflen)

Make the return type ssize_t.

unsigned long -> size_t buflen (or perhaps ssize_t if you want to compare
below to have the same signedness).

> +{
> + int plen;

ssize_t

> +
> + /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
> + * than 256 bytes in size.
> + */
> + if (buflen < 4)
> + return -EINVAL;
> +
> + if (p[0] != 0x30 &&
> + p[1] != 0x82)
> + return -EINVAL;
> +
> + plen = (p[2] << 8) | p[3];
> + plen += 4;
> + if (plen > buflen)
> + return -EINVAL;
> +
> + return plen;
> +}
> +EXPORT_SYMBOL_GPL(x509_get_certificate_length);
> +
> int x509_load_certificate_list(const u8 cert_list[],
> const unsigned long list_size,
> const struct key *keyring)
> {
> key_ref_t key;
> const u8 *p, *end;
> - size_t plen;
> + int plen;

ssize_t plen.

--
i.

>
> p = cert_list;
> end = p + list_size;
> while (p < end) {
> - /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
> - * than 256 bytes in size.
> - */
> - if (end - p < 4)
> - goto dodgy_cert;
> - if (p[0] != 0x30 &&
> - p[1] != 0x82)
> - goto dodgy_cert;
> - plen = (p[2] << 8) | p[3];
> - plen += 4;
> - if (plen > end - p)
> + plen = x509_get_certificate_length(p, end - p);
> + if (plen < 0)
> goto dodgy_cert;
>
> key = key_create_or_update(make_key_ref(keyring, 1),
> diff --git a/include/keys/asymmetric-type.h b/include/keys/asymmetric-type.h
> index 69a13e1e5b2e..6705cfde25b9 100644
> --- a/include/keys/asymmetric-type.h
> +++ b/include/keys/asymmetric-type.h
> @@ -84,6 +84,8 @@ extern struct key *find_asymmetric_key(struct key *keyring,
> const struct asymmetric_key_id *id_2,
> bool partial);
>
> +int x509_get_certificate_length(const u8 *p, unsigned long buflen);
> +
> int x509_load_certificate_list(const u8 cert_list[], const unsigned long list_size,
> const struct key *keyring);
>
>
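
A sketch of the helper with the types suggested above (illustrative only;
the actual respin may of course look different):

	ssize_t x509_get_certificate_length(const u8 *p, size_t buflen)
	{
		ssize_t plen;

		/* Each cert begins with an ASN.1 SEQUENCE tag and must be
		 * more than 256 bytes in size.
		 */
		if (buflen < 4)
			return -EINVAL;

		if (p[0] != 0x30 &&
		    p[1] != 0x82)
			return -EINVAL;

		plen = ((p[2] << 8) | p[3]) + 4;
		if ((size_t)plen > buflen)
			return -EINVAL;

		return plen;
	}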

2023-10-03 08:32:06

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH 02/12] X.509: Parse Subject Alternative Name in certificates

On Thu, 28 Sep 2023, Lukas Wunner wrote:

> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires validating the Subject Alternative Name
> in X.509 certificates.
>
> Store a pointer to the Subject Alternative Name upon parsing for
> consumption by CMA-SPDM.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> crypto/asymmetric_keys/x509_cert_parser.c | 15 +++++++++++++++
> include/keys/x509-parser.h | 2 ++
> 2 files changed, 17 insertions(+)
>
> diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
> index 0a7049b470c1..18dfd564740b 100644
> --- a/crypto/asymmetric_keys/x509_cert_parser.c
> +++ b/crypto/asymmetric_keys/x509_cert_parser.c
> @@ -579,6 +579,21 @@ int x509_process_extension(void *context, size_t hdrlen,
> return 0;
> }
>
> + if (ctx->last_oid == OID_subjectAltName) {
> + /*
> + * A certificate MUST NOT include more than one instance
> + * of a particular extension (RFC 5280 sec 4.2).
> + */
> + if (ctx->cert->raw_san) {
> + pr_err("Duplicate Subject Alternative Name\n");
> + return -EINVAL;
> + }
> +
> + ctx->cert->raw_san = v;
> + ctx->cert->raw_san_size = vlen;
> + return 0;
> + }
> +
> if (ctx->last_oid == OID_keyUsage) {
> /*
> * Get hold of the keyUsage bit string
> diff --git a/include/keys/x509-parser.h b/include/keys/x509-parser.h
> index 7c2ebc84791f..9c6e7cdf4870 100644
> --- a/include/keys/x509-parser.h
> +++ b/include/keys/x509-parser.h
> @@ -32,6 +32,8 @@ struct x509_certificate {
> unsigned raw_subject_size;
> unsigned raw_skid_size;
> const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
> + const void *raw_san; /* Raw subjectAltName in ASN.1 */
> + unsigned raw_san_size;
> unsigned index;
> bool seen; /* Infinite recursion prevention */
> bool verified;
>

Reviewed-by: Ilpo Järvinen <[email protected]>

--
i.

2023-10-03 08:38:09

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH 04/12] certs: Create blacklist keyring earlier

On Thu, 28 Sep 2023, Lukas Wunner wrote:

> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires parsing X.509 certificates upon
> device enumeration, which happens in a subsys_initcall().
>
> Parsing X.509 certificates accesses the blacklist keyring:
> x509_cert_parse()
> x509_get_sig_params()
> is_hash_blacklisted()
> keyring_search()
>
> So far the keyring is created much later in a device_initcall(). Avoid
> a NULL pointer dereference on access to the keyring by creating it one
> initcall level earlier than PCI device enumeration, i.e. in an
> arch_initcall().
>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> certs/blacklist.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/certs/blacklist.c b/certs/blacklist.c
> index 675dd7a8f07a..34185415d451 100644
> --- a/certs/blacklist.c
> +++ b/certs/blacklist.c
> @@ -311,7 +311,7 @@ static int restrict_link_for_blacklist(struct key *dest_keyring,
> * Initialise the blacklist
> *
> * The blacklist_init() function is registered as an initcall via
> - * device_initcall(). As a result if the blacklist_init() function fails for
> + * arch_initcall(). As a result if the blacklist_init() function fails for
> * any reason the kernel continues to execute. While cleanly returning -ENODEV
> * could be acceptable for some non-critical kernel parts, if the blacklist
> * keyring fails to load it defeats the certificate/key based deny list for
> @@ -356,7 +356,7 @@ static int __init blacklist_init(void)
> /*
> * Must be initialised before we try and load the keys into the keyring.
> */
> -device_initcall(blacklist_init);
> +arch_initcall(blacklist_init);
>
> #ifdef CONFIG_SYSTEM_REVOCATION_LIST
> /*
>

Reviewed-by: Ilpo Järvinen <[email protected]>

--
i.

2023-10-03 09:04:53

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH 11/12] PCI/CMA: Expose in sysfs whether devices are authenticated

On Thu, 28 Sep 2023, Lukas Wunner wrote:

> The PCI core has just been amended to authenticate CMA-capable devices
> on enumeration and store the result in an "authenticated" bit in struct
> pci_dev->spdm_state.
>
> Expose the bit to user space through an eponymous sysfs attribute.
>
> Allow user space to trigger reauthentication (e.g. after it has updated
> the CMA keyring) by writing to the sysfs attribute.
>
> Subject to further discussion, a future commit might add a user-defined
> policy to forbid driver binding to devices which failed authentication,
> similar to the "authorized" attribute for USB.
>
> Alternatively, authentication success might be signaled to user space
> through a uevent, whereupon it may bind a (blacklisted) driver.
> A uevent signaling authentication failure might similarly cause user
> space to unbind or outright remove the potentially malicious device.
>
> Traffic from devices which failed authentication could also be filtered
> through ACS I/O Request Blocking Enable (PCIe r6.1 sec 7.7.11.3) or
> through Link Disable (PCIe r6.1 sec 7.5.3.7). Unlike an IOMMU, that
> will not only protect the host, but also prevent malicious peer-to-peer
> traffic to other devices.

IMO it would be good to mention the DOE stuff also in the changelog (it's
currently only in the sysfs docs).

--
i.

> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> Documentation/ABI/testing/sysfs-bus-pci | 27 +++++++++
> drivers/pci/Kconfig | 3 +
> drivers/pci/Makefile | 1 +
> drivers/pci/cma-sysfs.c | 73 +++++++++++++++++++++++++
> drivers/pci/cma.c | 2 +
> drivers/pci/doe.c | 2 +
> drivers/pci/pci-sysfs.c | 3 +
> drivers/pci/pci.h | 1 +
> include/linux/pci.h | 2 +
> 9 files changed, 114 insertions(+)
> create mode 100644 drivers/pci/cma-sysfs.c
>
> diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
> index ecf47559f495..2ea9b8deffcc 100644
> --- a/Documentation/ABI/testing/sysfs-bus-pci
> +++ b/Documentation/ABI/testing/sysfs-bus-pci
> @@ -500,3 +500,30 @@ Description:
> console drivers from the device. Raw users of pci-sysfs
> resourceN attributes must be terminated prior to resizing.
> Success of the resizing operation is not guaranteed.
> +
> +What: /sys/bus/pci/devices/.../authenticated
> +Date: September 2023
> +Contact: Lukas Wunner <[email protected]>
> +Description:
> + This file contains 1 if the device authenticated successfully
> + with CMA-SPDM (PCIe r6.1 sec 6.31). It contains 0 if the
> + device failed authentication (and may thus be malicious).
> +
> + Writing anything to this file causes reauthentication.
> + That may be opportune after updating the .cma keyring.
> +
> + The file is not visible if authentication is unsupported
> + by the device.
> +
> + If the kernel could not determine whether authentication is
> + supported because memory was low or DOE communication with
> + the device was not working, the file is visible but accessing
> + it fails with error code ENOTTY.
> +
> + This prevents downgrade attacks where an attacker consumes
> + memory or disturbs DOE communication in order to create the
> + appearance that a device does not support authentication.
> +
> + The reason why authentication support could not be determined
> + is apparent from "dmesg". To probe for authentication support
> + again, exercise the "remove" and "rescan" attributes.
> diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
> index c9aa5253ac1f..51df3be3438e 100644
> --- a/drivers/pci/Kconfig
> +++ b/drivers/pci/Kconfig
> @@ -129,6 +129,9 @@ config PCI_CMA
> A PCI DOE mailbox is used as transport for DMTF SPDM based
> attestation, measurement and secure channel establishment.
>
> +config PCI_CMA_SYSFS
> + def_bool PCI_CMA && SYSFS
> +
> config PCI_DOE
> bool
>
> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
> index a18812b8832b..612ae724cd2d 100644
> --- a/drivers/pci/Makefile
> +++ b/drivers/pci/Makefile
> @@ -35,6 +35,7 @@ obj-$(CONFIG_PCI_DOE) += doe.o
> obj-$(CONFIG_PCI_DYNAMIC_OF_NODES) += of_property.o
>
> obj-$(CONFIG_PCI_CMA) += cma.o cma-x509.o cma.asn1.o
> +obj-$(CONFIG_PCI_CMA_SYSFS) += cma-sysfs.o
> $(obj)/cma-x509.o: $(obj)/cma.asn1.h
> $(obj)/cma.asn1.o: $(obj)/cma.asn1.c $(obj)/cma.asn1.h
>
> diff --git a/drivers/pci/cma-sysfs.c b/drivers/pci/cma-sysfs.c
> new file mode 100644
> index 000000000000..b2d45f96601a
> --- /dev/null
> +++ b/drivers/pci/cma-sysfs.c
> @@ -0,0 +1,73 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Component Measurement and Authentication (CMA-SPDM, PCIe r6.1 sec 6.31)
> + *
> + * Copyright (C) 2023 Intel Corporation
> + */
> +
> +#include <linux/pci.h>
> +#include <linux/spdm.h>
> +#include <linux/sysfs.h>
> +
> +#include "pci.h"
> +
> +static ssize_t authenticated_store(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> + ssize_t rc;
> +
> + if (!pdev->cma_capable &&
> + (pdev->cma_init_failed || pdev->doe_init_failed))
> + return -ENOTTY;
> +
> + rc = pci_cma_reauthenticate(pdev);
> + if (rc)
> + return rc;
> +
> + return count;
> +}
> +
> +static ssize_t authenticated_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (!pdev->cma_capable &&
> + (pdev->cma_init_failed || pdev->doe_init_failed))
> + return -ENOTTY;
> +
> + return sysfs_emit(buf, "%u\n", spdm_authenticated(pdev->spdm_state));
> +}
> +static DEVICE_ATTR_RW(authenticated);
> +
> +static struct attribute *pci_cma_attrs[] = {
> + &dev_attr_authenticated.attr,
> + NULL
> +};
> +
> +static umode_t pci_cma_attrs_are_visible(struct kobject *kobj,
> + struct attribute *a, int n)
> +{
> + struct device *dev = kobj_to_dev(kobj);
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + /*
> + * If CMA or DOE initialization failed, CMA attributes must be visible
> + * and return an error on access. This prevents downgrade attacks
> + * where an attacker disturbs memory allocation or DOE communication
> + * in order to create the appearance that CMA is unsupported.
> + * The attacker may achieve that by simply hogging memory.
> + */
> + if (!pdev->cma_capable &&
> + !pdev->cma_init_failed && !pdev->doe_init_failed)
> + return 0;
> +
> + return a->mode;
> +}
> +
> +const struct attribute_group pci_cma_attr_group = {
> + .attrs = pci_cma_attrs,
> + .is_visible = pci_cma_attrs_are_visible,
> +};
> diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
> index 89d23fdc37ec..c539ad85a28f 100644
> --- a/drivers/pci/cma.c
> +++ b/drivers/pci/cma.c
> @@ -52,6 +52,7 @@ void pci_cma_init(struct pci_dev *pdev)
> int rc;
>
> if (!pci_cma_keyring) {
> + pdev->cma_init_failed = true;
> return;
> }
>
> @@ -67,6 +68,7 @@ void pci_cma_init(struct pci_dev *pdev)
> PCI_DOE_MAX_PAYLOAD, pci_cma_keyring,
> pci_cma_validate);
> if (!pdev->spdm_state) {
> + pdev->cma_init_failed = true;
> return;
> }
>
> diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c
> index 79f0336eb0c3..fabbda68edac 100644
> --- a/drivers/pci/doe.c
> +++ b/drivers/pci/doe.c
> @@ -686,6 +686,7 @@ void pci_doe_init(struct pci_dev *pdev)
> PCI_EXT_CAP_ID_DOE))) {
> doe_mb = pci_doe_create_mb(pdev, offset);
> if (IS_ERR(doe_mb)) {
> + pdev->doe_init_failed = true;
> pci_err(pdev, "[%x] failed to create mailbox: %ld\n",
> offset, PTR_ERR(doe_mb));
> continue;
> @@ -693,6 +694,7 @@ void pci_doe_init(struct pci_dev *pdev)
>
> rc = xa_insert(&pdev->doe_mbs, offset, doe_mb, GFP_KERNEL);
> if (rc) {
> + pdev->doe_init_failed = true;
> pci_err(pdev, "[%x] failed to insert mailbox: %d\n",
> offset, rc);
> pci_doe_destroy_mb(doe_mb);
> diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
> index d9eede2dbc0e..7024e08e1b9a 100644
> --- a/drivers/pci/pci-sysfs.c
> +++ b/drivers/pci/pci-sysfs.c
> @@ -1655,6 +1655,9 @@ static const struct attribute_group *pci_dev_attr_groups[] = {
> #endif
> #ifdef CONFIG_PCIEASPM
> &aspm_ctrl_attr_group,
> +#endif
> +#ifdef CONFIG_PCI_CMA_SYSFS
> + &pci_cma_attr_group,
> #endif
> NULL,
> };
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 71092ccf4fbd..d80cc06be0cc 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -328,6 +328,7 @@ void pci_cma_destroy(struct pci_dev *pdev);
> int pci_cma_reauthenticate(struct pci_dev *pdev);
> struct x509_certificate;
> int pci_cma_validate(struct device *dev, struct x509_certificate *leaf_cert);
> +extern const struct attribute_group pci_cma_attr_group;
> #else
> static inline void pci_cma_init(struct pci_dev *pdev) { }
> static inline void pci_cma_destroy(struct pci_dev *pdev) { }
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 2bc11d8b567e..2c5fde81bb85 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -516,10 +516,12 @@ struct pci_dev {
> #endif
> #ifdef CONFIG_PCI_DOE
> struct xarray doe_mbs; /* Data Object Exchange mailboxes */
> + unsigned int doe_init_failed:1;
> #endif
> #ifdef CONFIG_PCI_CMA
> struct spdm_state *spdm_state; /* Security Protocol and Data Model */
> unsigned int cma_capable:1; /* Authentication supported */
> + unsigned int cma_init_failed:1;
> #endif
> u16 acs_cap; /* ACS Capability offset */
> phys_addr_t rom; /* Physical address if not from BAR */
>
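
As a purely hypothetical illustration of how userspace might consume the
attribute documented above (the device address is a placeholder):

	#include <stdio.h>

	int main(void)
	{
		const char *path =
			"/sys/bus/pci/devices/0000:01:00.0/authenticated";
		char buf[4] = "";
		FILE *f;

		/* Read the current state: "1\n" (authenticated) or "0\n". */
		f = fopen(path, "r");
		if (!f)
			return 1;	/* attribute absent or access failed */
		fgets(buf, sizeof(buf), f);
		fclose(f);
		printf("authenticated: %s", buf);

		/* Writing anything triggers reauthentication. */
		f = fopen(path, "w");
		if (f) {
			fputs("1\n", f);
			fclose(f);
		}
		return 0;
	}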

2023-10-03 09:11:13

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 04/12] certs: Create blacklist keyring earlier

On Thu, 28 Sep 2023 19:32:32 +0200
Lukas Wunner <[email protected]> wrote:

> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires parsing X.509 certificates upon
> device enumeration, which happens in a subsys_initcall().
>
> Parsing X.509 certificates accesses the blacklist keyring:
> x509_cert_parse()
> x509_get_sig_params()
> is_hash_blacklisted()
> keyring_search()
>
> So far the keyring is created much later in a device_initcall(). Avoid
> a NULL pointer dereference on access to the keyring by creating it one
> initcall level earlier than PCI device enumeration, i.e. in an
> arch_initcall().
>
> Signed-off-by: Lukas Wunner <[email protected]>

Indeed, it seems like it needs to run before subsys_initcall, so whilst it
feels a bit weird to do it in an initcall named "arch", I guess that's the
best choice available.

Reviewed-by: Jonathan Cameron <[email protected]>

> ---
> certs/blacklist.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/certs/blacklist.c b/certs/blacklist.c
> index 675dd7a8f07a..34185415d451 100644
> --- a/certs/blacklist.c
> +++ b/certs/blacklist.c
> @@ -311,7 +311,7 @@ static int restrict_link_for_blacklist(struct key *dest_keyring,
> * Initialise the blacklist
> *
> * The blacklist_init() function is registered as an initcall via
> - * device_initcall(). As a result if the blacklist_init() function fails for
> + * arch_initcall(). As a result if the blacklist_init() function fails for
> * any reason the kernel continues to execute. While cleanly returning -ENODEV
> * could be acceptable for some non-critical kernel parts, if the blacklist
> * keyring fails to load it defeats the certificate/key based deny list for
> @@ -356,7 +356,7 @@ static int __init blacklist_init(void)
> /*
> * Must be initialised before we try and load the keys into the keyring.
> */
> -device_initcall(blacklist_init);
> +arch_initcall(blacklist_init);
>
> #ifdef CONFIG_SYSTEM_REVOCATION_LIST
> /*
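
For reference, the relevant initcall levels run in roughly this order
(simplified from include/linux/init.h), which is what lets the keyring
exist before certificates are parsed during PCI enumeration:

	/*
	 * core_initcall      (1)
	 * postcore_initcall  (2)
	 * arch_initcall      (3)  <- blacklist keyring created here (this patch)
	 * subsys_initcall    (4)  <- PCI enumeration, hence CMA-SPDM cert parsing
	 * fs_initcall        (5)
	 * device_initcall    (6)  <- previous home of blacklist_init()
	 * late_initcall      (7)
	 */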

2023-10-03 09:13:30

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

On Thu, 28 Sep 2023, Lukas Wunner wrote:

> At any given time, only a single entity in a physical system may have
> an SPDM connection to a device. That's because the GET_VERSION request
> (which begins an authentication sequence) resets "the connection and all
> context associated with that connection" (SPDM 1.3.0 margin no 158).
>
> Thus, when a device is passed through to a guest and the guest has
> authenticated it, a subsequent authentication by the host would reset
> the device's CMA-SPDM session behind the guest's back.
>
> Prevent by letting the guest claim exclusive CMA ownership of the device
> during passthrough. Refuse CMA reauthentication on the host as long.

Is something missing after "as long"? Perhaps "as long as the device is
passed through"?

Also, "Prevent by" feels incomplete grammar-wise; it begs the question:
prevent what? Perhaps it's enough to start just with "Let the guest ..." as
the next sentence covers the prevent part anyway.

--
i.


> After passthrough has concluded, reauthenticate the device on the host.
>
> Store the flag indicating guest ownership in struct pci_dev's priv_flags
> to avoid the concurrency issues observed by commit 44bda4b7d26e ("PCI:
> Fix is_added/is_busmaster race condition").
>
> Side note: The Data Object Exchange r1.1 ECN (published Oct 11 2022)
> retrofits DOE with Connection IDs. In theory these allow simultaneous
> CMA-SPDM connections by multiple entities to the same device. But the
> first hardware generation capable of CMA-SPDM only supports DOE r1.0.
> The specification also neglects to reserve unique Connection IDs for
> hosts and guests, which further limits its usefulness.
>
> In general, forcing the transport to compensate for SPDM's lack of a
> connection identifier feels like a questionable layering violation.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> Cc: Alex Williamson <[email protected]>
> ---
> drivers/pci/cma.c | 41 ++++++++++++++++++++++++++++++++
> drivers/pci/pci.h | 1 +
> drivers/vfio/pci/vfio_pci_core.c | 9 +++++--
> include/linux/pci.h | 8 +++++++
> include/linux/spdm.h | 2 ++
> lib/spdm_requester.c | 11 +++++++++
> 6 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
> index c539ad85a28f..b3eee137ffe2 100644
> --- a/drivers/pci/cma.c
> +++ b/drivers/pci/cma.c
> @@ -82,9 +82,50 @@ int pci_cma_reauthenticate(struct pci_dev *pdev)
> if (!pdev->cma_capable)
> return -ENOTTY;
>
> + if (test_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags))
> + return -EPERM;
> +
> return spdm_authenticate(pdev->spdm_state);
> }
>
> +#if IS_ENABLED(CONFIG_VFIO_PCI_CORE)
> +/**
> + * pci_cma_claim_ownership() - Claim exclusive CMA-SPDM control for guest VM
> + * @pdev: PCI device
> + *
> + * Claim exclusive CMA-SPDM control for a guest virtual machine before
> + * passthrough of @pdev. The host refrains from performing CMA-SPDM
> + * authentication of the device until passthrough has concluded.
> + *
> + * Necessary because the GET_VERSION request resets the SPDM connection
> + * and DOE r1.0 allows only a single SPDM connection for the entire system.
> + * So the host could reset the guest's SPDM connection behind the guest's back.
> + */
> +void pci_cma_claim_ownership(struct pci_dev *pdev)
> +{
> + set_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags);
> +
> + if (pdev->cma_capable)
> + spdm_await(pdev->spdm_state);
> +}
> +EXPORT_SYMBOL(pci_cma_claim_ownership);
> +
> +/**
> + * pci_cma_return_ownership() - Relinquish CMA-SPDM control to the host
> + * @pdev: PCI device
> + *
> + * Relinquish CMA-SPDM control to the host after passthrough of @pdev to a
> + * guest virtual machine has concluded.
> + */
> +void pci_cma_return_ownership(struct pci_dev *pdev)
> +{
> + clear_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags);
> +
> + pci_cma_reauthenticate(pdev);
> +}
> +EXPORT_SYMBOL(pci_cma_return_ownership);
> +#endif
> +
> void pci_cma_destroy(struct pci_dev *pdev)
> {
> if (pdev->spdm_state)
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index d80cc06be0cc..05ae6359b152 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -388,6 +388,7 @@ static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
> #define PCI_DEV_ADDED 0
> #define PCI_DPC_RECOVERED 1
> #define PCI_DPC_RECOVERING 2
> +#define PCI_CMA_OWNED_BY_GUEST 3
>
> static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
> {
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index 1929103ee59a..6f300664a342 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -487,10 +487,12 @@ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
> if (ret)
> goto out_power;
>
> + pci_cma_claim_ownership(pdev);
> +
> /* If reset fails because of the device lock, fail this path entirely */
> ret = pci_try_reset_function(pdev);
> if (ret == -EAGAIN)
> - goto out_disable_device;
> + goto out_cma_return;
>
> vdev->reset_works = !ret;
> pci_save_state(pdev);
> @@ -549,7 +551,8 @@ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
> out_free_state:
> kfree(vdev->pci_saved_state);
> vdev->pci_saved_state = NULL;
> -out_disable_device:
> +out_cma_return:
> + pci_cma_return_ownership(pdev);
> pci_disable_device(pdev);
> out_power:
> if (!disable_idle_d3)
> @@ -678,6 +681,8 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
>
> vfio_pci_dev_set_try_reset(vdev->vdev.dev_set);
>
> + pci_cma_return_ownership(pdev);
> +
> /* Put the pm-runtime usage counter acquired during enable */
> if (!disable_idle_d3)
> pm_runtime_put(&pdev->dev);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 2c5fde81bb85..c14ea0e74fc4 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -2386,6 +2386,14 @@ static inline resource_size_t pci_iov_resource_size(struct pci_dev *dev, int res
> static inline void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe) { }
> #endif
>
> +#ifdef CONFIG_PCI_CMA
> +void pci_cma_claim_ownership(struct pci_dev *pdev);
> +void pci_cma_return_ownership(struct pci_dev *pdev);
> +#else
> +static inline void pci_cma_claim_ownership(struct pci_dev *pdev) { }
> +static inline void pci_cma_return_ownership(struct pci_dev *pdev) { }
> +#endif
> +
> #if defined(CONFIG_HOTPLUG_PCI) || defined(CONFIG_HOTPLUG_PCI_MODULE)
> void pci_hp_create_module_link(struct pci_slot *pci_slot);
> void pci_hp_remove_module_link(struct pci_slot *pci_slot);
> diff --git a/include/linux/spdm.h b/include/linux/spdm.h
> index 69a83bc2eb41..d796127fbe9a 100644
> --- a/include/linux/spdm.h
> +++ b/include/linux/spdm.h
> @@ -34,6 +34,8 @@ int spdm_authenticate(struct spdm_state *spdm_state);
>
> bool spdm_authenticated(struct spdm_state *spdm_state);
>
> +void spdm_await(struct spdm_state *spdm_state);
> +
> void spdm_destroy(struct spdm_state *spdm_state);
>
> #endif
> diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
> index b2af2074ba6f..99424d6aebf5 100644
> --- a/lib/spdm_requester.c
> +++ b/lib/spdm_requester.c
> @@ -1483,6 +1483,17 @@ struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
> }
> EXPORT_SYMBOL_GPL(spdm_create);
>
> +/**
> + * spdm_await() - Wait for ongoing spdm_authenticate() to finish
> + *
> + * @spdm_state: SPDM session state
> + */
> +void spdm_await(struct spdm_state *spdm_state)
> +{
> + mutex_lock(&spdm_state->lock);
> + mutex_unlock(&spdm_state->lock);
> +}
> +
> /**
> * spdm_destroy() - Destroy SPDM session
> *
>

2023-10-03 10:36:22

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Thu, 28 Sep 2023, Lukas Wunner wrote:

> From: Jonathan Cameron <[email protected]>
>
> The Security Protocol and Data Model (SPDM) allows for authentication,
> measurement, key exchange and encrypted sessions with devices.
>
> A commonly used term for authentication and measurement is attestation.
>
> SPDM was conceived by the Distributed Management Task Force (DMTF).
> Its specification defines a request/response protocol spoken between
> host and attached devices over a variety of transports:
>
> https://www.dmtf.org/dsp/DSP0274
>
> This implementation supports SPDM 1.0 through 1.3 (the latest version).
> It is designed to be transport-agnostic as the kernel already supports
> two different SPDM-capable transports:
>
> * PCIe Data Object Exchange (PCIe r6.1 sec 6.30, drivers/pci/doe.c)
> * Management Component Transport Protocol (MCTP,
> Documentation/networking/mctp.rst)
>
> Use cases for SPDM include, but are not limited to:
>
> * PCIe Component Measurement and Authentication (PCIe r6.1 sec 6.31)
> * Compute Express Link (CXL r3.0 sec 14.11.6)
> * Open Compute Project (Attestation of System Components r1.0)
> https://www.opencompute.org/documents/attestation-v1-0-20201104-pdf
>
> The initial focus of this implementation is enabling PCIe CMA device
> authentication. As such, only a subset of the SPDM specification is
> contained herein, namely the request/response sequence GET_VERSION,
> GET_CAPABILITIES, NEGOTIATE_ALGORITHMS, GET_DIGESTS, GET_CERTIFICATE
> and CHALLENGE.
>
> A simple API is provided for subsystems wishing to authenticate devices:
> spdm_create(), spdm_authenticate() (can be called repeatedly for
> reauthentication) and spdm_destroy(). Certificates presented by devices
> are validated against an in-kernel keyring of trusted root certificates.
> A pointer to the keyring is passed to spdm_create().
>
> The set of supported cryptographic algorithms is limited to those
> declared mandatory in PCIe r6.1 sec 6.31.3. Adding more algorithms
> is straightforward as long as the crypto subsystem supports them.
>
> Future commits will extend this implementation with support for
> measurement, key exchange and encrypted sessions.
>
> So far, only the SPDM requester role is implemented. Care was taken to
> allow for effortless addition of the responder role at a later stage.
> This could be needed for a PCIe host bridge operating in endpoint mode.
> The responder role will be able to reuse struct definitions and helpers
> such as spdm_create_combined_prefix(). Those can be moved to
> spdm_common.{h,c} files upon introduction of the responder role.
> For now, all is kept in a single source file to avoid polluting the
> global namespace with unnecessary symbols.
>
> Credits: Jonathan wrote a proof-of-concept of this SPDM implementation.
> Lukas reworked it for upstream.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> MAINTAINERS | 9 +
> include/linux/spdm.h | 35 +
> lib/Kconfig | 15 +
> lib/Makefile | 2 +
> lib/spdm_requester.c | 1487 ++++++++++++++++++++++++++++++++++++++++++
> 5 files changed, 1548 insertions(+)
> create mode 100644 include/linux/spdm.h
> create mode 100644 lib/spdm_requester.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 90f13281d297..2591d2217d65 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -19299,6 +19299,15 @@ M: Security Officers <[email protected]>
> S: Supported
> F: Documentation/process/security-bugs.rst
>
> +SECURITY PROTOCOL AND DATA MODEL (SPDM)
> +M: Jonathan Cameron <[email protected]>
> +M: Lukas Wunner <[email protected]>
> +L: [email protected]
> +L: [email protected]
> +S: Maintained
> +F: include/linux/spdm.h
> +F: lib/spdm*
> +
> SECURITY SUBSYSTEM
> M: Paul Moore <[email protected]>
> M: James Morris <[email protected]>
> diff --git a/include/linux/spdm.h b/include/linux/spdm.h
> new file mode 100644
> index 000000000000..e824063793a7
> --- /dev/null
> +++ b/include/linux/spdm.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMTF Security Protocol and Data Model (SPDM)
> + * https://www.dmtf.org/dsp/DSP0274
> + *
> + * Copyright (C) 2021-22 Huawei
> + * Jonathan Cameron <[email protected]>
> + *
> + * Copyright (C) 2022-23 Intel Corporation
> + */
> +
> +#ifndef _SPDM_H_
> +#define _SPDM_H_
> +
> +#include <linux/types.h>
> +
> +struct key;
> +struct device;
> +struct spdm_state;
> +
> +typedef int (spdm_transport)(void *priv, struct device *dev,
> + const void *request, size_t request_sz,
> + void *response, size_t response_sz);

This returns a length or an error, right? If so, return ssize_t instead.

If you make this change, alter the caller types too.
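
I.e. something along these lines (untested sketch, parameter names
unchanged):

typedef ssize_t (spdm_transport)(void *priv, struct device *dev,
				 const void *request, size_t request_sz,
				 void *response, size_t response_sz);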

> +struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
> + void *transport_priv, u32 transport_sz,
> + struct key *keyring);
> +
> +int spdm_authenticate(struct spdm_state *spdm_state);
> +
> +bool spdm_authenticated(struct spdm_state *spdm_state);
> +
> +void spdm_destroy(struct spdm_state *spdm_state);
> +
> +#endif
> diff --git a/lib/Kconfig b/lib/Kconfig
> index c686f4adc124..3516cf1dad16 100644
> --- a/lib/Kconfig
> +++ b/lib/Kconfig
> @@ -764,3 +764,18 @@ config ASN1_ENCODER
>
> config POLYNOMIAL
> tristate
> +
> +config SPDM_REQUESTER
> + tristate
> + select KEYS
> + select ASYMMETRIC_KEY_TYPE
> + select ASYMMETRIC_PUBLIC_KEY_SUBTYPE
> + select X509_CERTIFICATE_PARSER
> + help
> + The Security Protocol and Data Model (SPDM) allows for authentication,
> + measurement, key exchange and encrypted sessions with devices. This
> + option enables support for the SPDM requester role.
> +
> + Crypto algorithms offered to SPDM responders are limited to those
> + enabled in .config. Drivers selecting SPDM_REQUESTER need to also
> + select any algorithms they deem mandatory.
> diff --git a/lib/Makefile b/lib/Makefile
> index 740109b6e2c8..d9ae58a9ca83 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -315,6 +315,8 @@ obj-$(CONFIG_PERCPU_TEST) += percpu_test.o
> obj-$(CONFIG_ASN1) += asn1_decoder.o
> obj-$(CONFIG_ASN1_ENCODER) += asn1_encoder.o
>
> +obj-$(CONFIG_SPDM_REQUESTER) += spdm_requester.o
> +
> obj-$(CONFIG_FONT_SUPPORT) += fonts/
>
> hostprogs := gen_crc32table
> diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
> new file mode 100644
> index 000000000000..407041036599
> --- /dev/null
> +++ b/lib/spdm_requester.c
> @@ -0,0 +1,1487 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMTF Security Protocol and Data Model (SPDM)
> + * https://www.dmtf.org/dsp/DSP0274
> + *
> + * Copyright (C) 2021-22 Huawei
> + * Jonathan Cameron <[email protected]>
> + *
> + * Copyright (C) 2022-23 Intel Corporation
> + */
> +
> +#define dev_fmt(fmt) "SPDM: " fmt
> +
> +#include <linux/dev_printk.h>
> +#include <linux/key.h>
> +#include <linux/module.h>
> +#include <linux/random.h>
> +#include <linux/spdm.h>
> +
> +#include <asm/unaligned.h>
> +#include <crypto/hash.h>
> +#include <crypto/public_key.h>
> +#include <keys/asymmetric-type.h>
> +#include <keys/x509-parser.h>
> +
> +/* SPDM versions supported by this implementation */
> +#define SPDM_MIN_VER 0x10
> +#define SPDM_MAX_VER 0x13
> +
> +#define SPDM_CACHE_CAP BIT(0) /* response only */
> +#define SPDM_CERT_CAP BIT(1)
> +#define SPDM_CHAL_CAP BIT(2)
> +#define SPDM_MEAS_CAP_MASK GENMASK(4, 3) /* response only */
> +#define SPDM_MEAS_CAP_NO 0 /* response only */
> +#define SPDM_MEAS_CAP_MEAS 1 /* response only */
> +#define SPDM_MEAS_CAP_MEAS_SIG 2 /* response only */
> +#define SPDM_MEAS_FRESH_CAP BIT(5) /* response only */
> +#define SPDM_ENCRYPT_CAP BIT(6)
> +#define SPDM_MAC_CAP BIT(7)
> +#define SPDM_MUT_AUTH_CAP BIT(8)
> +#define SPDM_KEY_EX_CAP BIT(9)
> +#define SPDM_PSK_CAP_MASK GENMASK(11, 10)
> +#define SPDM_PSK_CAP_NO 0
> +#define SPDM_PSK_CAP_PSK 1
> +#define SPDM_PSK_CAP_PSK_CTX 2 /* response only */
> +#define SPDM_ENCAP_CAP BIT(12)
> +#define SPDM_HBEAT_CAP BIT(13)
> +#define SPDM_KEY_UPD_CAP BIT(14)
> +#define SPDM_HANDSHAKE_ITC_CAP BIT(15)
> +#define SPDM_PUB_KEY_ID_CAP BIT(16)
> +#define SPDM_CHUNK_CAP BIT(17) /* 1.2 */
> +#define SPDM_ALIAS_CERT_CAP BIT(18) /* 1.2 response only */
> +#define SPDM_SET_CERT_CAP BIT(19) /* 1.2 response only */
> +#define SPDM_CSR_CAP BIT(20) /* 1.2 response only */
> +#define SPDM_CERT_INST_RESET_CAP BIT(21) /* 1.2 response only */
> +#define SPDM_EP_INFO_CAP_MASK GENMASK(23, 22) /* 1.3 */
> +#define SPDM_EP_INFO_CAP_NO 0 /* 1.3 */
> +#define SPDM_EP_INFO_CAP_RSP 1 /* 1.3 */
> +#define SPDM_EP_INFO_CAP_RSP_SIG 2 /* 1.3 */
> +#define SPDM_MEL_CAP BIT(24) /* 1.3 response only */
> +#define SPDM_EVENT_CAP BIT(25) /* 1.3 */
> +#define SPDM_MULTI_KEY_CAP_MASK GENMASK(27, 26) /* 1.3 */
> +#define SPDM_MULTI_KEY_CAP_NO 0 /* 1.3 */
> +#define SPDM_MULTI_KEY_CAP_ONLY 1 /* 1.3 */
> +#define SPDM_MULTI_KEY_CAP_SEL 2 /* 1.3 */
> +#define SPDM_GET_KEY_PAIR_INFO_CAP BIT(28) /* 1.3 response only */
> +#define SPDM_SET_KEY_PAIR_INFO_CAP BIT(29) /* 1.3 response only */
> +
> +/* SPDM capabilities supported by this implementation */
> +#define SPDM_CAPS (SPDM_CERT_CAP | SPDM_CHAL_CAP)
> +
> +/* SPDM capabilities required from responders */
> +#define SPDM_MIN_CAPS (SPDM_CERT_CAP | SPDM_CHAL_CAP)
> +
> +/*
> + * SPDM cryptographic timeout of this implementation:
> + * Assume calculations may take up to 1 sec on a busy machine, which equals
> + * roughly 1 << 20. That's within the limits mandated for responders by CMA
> + * (1 << 23 usec, PCIe r6.1 sec 6.31.3) and DOE (1 sec, PCIe r6.1 sec 6.30.2).
> + * Used in GET_CAPABILITIES exchange.
> + */
> +#define SPDM_CTEXPONENT 20
> +
> +#define SPDM_ASYM_RSASSA_2048 BIT(0)
> +#define SPDM_ASYM_RSAPSS_2048 BIT(1)
> +#define SPDM_ASYM_RSASSA_3072 BIT(2)
> +#define SPDM_ASYM_RSAPSS_3072 BIT(3)
> +#define SPDM_ASYM_ECDSA_ECC_NIST_P256 BIT(4)
> +#define SPDM_ASYM_RSASSA_4096 BIT(5)
> +#define SPDM_ASYM_RSAPSS_4096 BIT(6)
> +#define SPDM_ASYM_ECDSA_ECC_NIST_P384 BIT(7)
> +#define SPDM_ASYM_ECDSA_ECC_NIST_P521 BIT(8)
> +#define SPDM_ASYM_SM2_ECC_SM2_P256 BIT(9)
> +#define SPDM_ASYM_EDDSA_ED25519 BIT(10)
> +#define SPDM_ASYM_EDDSA_ED448 BIT(11)
> +
> +#define SPDM_HASH_SHA_256 BIT(0)
> +#define SPDM_HASH_SHA_384 BIT(1)
> +#define SPDM_HASH_SHA_512 BIT(2)
> +#define SPDM_HASH_SHA3_256 BIT(3)
> +#define SPDM_HASH_SHA3_384 BIT(4)
> +#define SPDM_HASH_SHA3_512 BIT(5)
> +#define SPDM_HASH_SM3_256 BIT(6)
> +
> +#if IS_ENABLED(CONFIG_CRYPTO_RSA)
> +#define SPDM_ASYM_RSA SPDM_ASYM_RSASSA_2048 | \
> + SPDM_ASYM_RSASSA_3072 | \
> + SPDM_ASYM_RSASSA_4096 |
> +#endif
> +
> +#if IS_ENABLED(CONFIG_CRYPTO_ECDSA)
> +#define SPDM_ASYM_ECDSA SPDM_ASYM_ECDSA_ECC_NIST_P256 | \
> + SPDM_ASYM_ECDSA_ECC_NIST_P384 |
> +#endif
> +
> +#if IS_ENABLED(CONFIG_CRYPTO_SHA256)
> +#define SPDM_HASH_SHA2_256 SPDM_HASH_SHA_256 |
> +#endif
> +
> +#if IS_ENABLED(CONFIG_CRYPTO_SHA512)
> +#define SPDM_HASH_SHA2_384_512 SPDM_HASH_SHA_384 | \
> + SPDM_HASH_SHA_512 |
> +#endif
> +
> +/* SPDM algorithms supported by this implementation */
> +#define SPDM_ASYM_ALGOS (SPDM_ASYM_RSA \
> + SPDM_ASYM_ECDSA 0)
> +
> +#define SPDM_HASH_ALGOS (SPDM_HASH_SHA2_256 \
> + SPDM_HASH_SHA2_384_512 0)
> +
> +/*
> + * Common header shared by all messages.
> + * Note that the meaning of param1 and param2 is message dependent.
> + */
> +struct spdm_header {
> + u8 version;
> + u8 code; /* RequestResponseCode */
> + u8 param1;
> + u8 param2;
> +} __packed;
> +
> +#define SPDM_REQ 0x80
> +#define SPDM_GET_VERSION 0x84

Align.

> +struct spdm_get_version_req {
> + u8 version;
> + u8 code;
> + u8 param1;
> + u8 param2;
> +} __packed;
> +
> +struct spdm_get_version_rsp {
> + u8 version;
> + u8 code;
> + u8 param1;
> + u8 param2;
> +
> + u8 reserved;
> + u8 version_number_entry_count;
> + __le16 version_number_entries[];

__counted_by(version_number_entry_count)?
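
I.e. something like (untested):

	u8 version_number_entry_count;
	__le16 version_number_entries[] __counted_by(version_number_entry_count);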

> +} __packed;
> +
> +#define SPDM_GET_CAPABILITIES 0xE1

There's lower-case hex later in the file; please try to be consistent.

> +#define SPDM_MIN_DATA_TRANSFER_SIZE 42 /* SPDM 1.2.0 margin no 226 */
> +
> +/* For this exchange the request and response messages have the same form */
> +struct spdm_get_capabilities_reqrsp {
> + u8 version;
> + u8 code;
> + u8 param1;
> + u8 param2;
> + /* End of SPDM 1.0 structure */
> +
> + u8 reserved1;
> + u8 ctexponent;
> + u16 reserved2;
> +
> + __le32 flags;
> + /* End of SPDM 1.1 structure */
> +
> + __le32 data_transfer_size; /* 1.2+ */
> + __le32 max_spdm_msg_size; /* 1.2+ */
> +} __packed;
> +
> +#define SPDM_NEGOTIATE_ALGS 0xE3
> +
> +struct spdm_negotiate_algs_req {
> + u8 version;
> + u8 code;
> + u8 param1; /* Number of ReqAlgStruct entries at end */
> + u8 param2;
> +
> + __le16 length;
> + u8 measurement_specification;
> + u8 other_params_support; /* 1.2+ */
> +
> + __le32 base_asym_algo;
> + __le32 base_hash_algo;
> +
> + u8 reserved1[12];
> + u8 ext_asym_count;
> + u8 ext_hash_count;
> + u8 reserved2;
> + u8 mel_specification; /* 1.3+ */
> +
> + /*
> + * Additional optional fields at end of this structure:
> + * - ExtAsym: 4 bytes * ext_asym_count
> + * - ExtHash: 4 bytes * ext_hash_count
> + * - ReqAlgStruct: variable size * param1 * 1.1+ *
> + */
> +} __packed;
> +
> +struct spdm_negotiate_algs_rsp {
> + u8 version;
> + u8 code;
> + u8 param1; /* Number of RespAlgStruct entries at end */
> + u8 param2;
> +
> + __le16 length;
> + u8 measurement_specification_sel;
> + u8 other_params_sel; /* 1.2+ */
> +
> + __le32 measurement_hash_algo;
> + __le32 base_asym_sel;
> + __le32 base_hash_sel;
> +
> + u8 reserved1[11];
> + u8 mel_specification_sel; /* 1.3+ */
> + u8 ext_asym_sel_count; /* Either 0 or 1 */
> + u8 ext_hash_sel_count; /* Either 0 or 1 */
> + u8 reserved2[2];
> +
> + /*
> + * Additional optional fields at end of this structure:
> + * - ExtAsym: 4 bytes * ext_asym_count
> + * - ExtHash: 4 bytes * ext_hash_count
> + * - RespAlgStruct: variable size * param1 * 1.1+ *
> + */
> +} __packed;
> +
> +struct spdm_req_alg_struct {
> + u8 alg_type;
> + u8 alg_count; /* 0x2K where K is number of alg_external entries */
> + __le16 alg_supported; /* Size is in alg_count[7:4], always 2 */
> + __le32 alg_external[];
> +} __packed;
> +
> +#define SPDM_GET_DIGESTS 0x81
> +
> +struct spdm_get_digests_req {
> + u8 version;
> + u8 code;
> + u8 param1; /* Reserved */
> + u8 param2; /* Reserved */
> +} __packed;
> +
> +struct spdm_get_digests_rsp {
> + u8 version;
> + u8 code;
> + u8 param1; /* SupportedSlotMask */ /* 1.3+ */
> + u8 param2; /* ProvisionedSlotMask */
> + u8 digests[]; /* Hash of struct spdm_cert_chain for each slot */
> + /* End of SPDM 1.2 structure */
> +
> + /*
> + * Additional optional fields at end of this structure:
> + * (omitted as long as we do not advertise MULTI_KEY_CAP)
> + * - KeyPairID: 1 byte for each slot * 1.3+ *
> + * - CertificateInfo: 1 byte for each slot * 1.3+ *
> + * - KeyUsageMask: 2 bytes for each slot * 1.3+ *
> + */
> +} __packed;
> +
> +#define SPDM_GET_CERTIFICATE 0x82
> +#define SPDM_SLOTS 8 /* SPDM 1.0.0 section 4.9.2.1 */
> +
> +struct spdm_get_certificate_req {
> + u8 version;
> + u8 code;
> + u8 param1; /* Slot number 0..7 */
> + u8 param2; /* SlotSizeRequested */ /* 1.3+ */
> + __le16 offset;
> + __le16 length;
> +} __packed;
> +
> +struct spdm_get_certificate_rsp {
> + u8 version;
> + u8 code;
> + u8 param1; /* Slot number 0..7 */
> + u8 param2; /* CertModel */ /* 1.3+ */
> + __le16 portion_length;
> + __le16 remainder_length;
> + u8 cert_chain[]; /* PortionLength long */
> +} __packed;
> +
> +struct spdm_cert_chain {
> + __le16 length;
> + u8 reserved[2];
> + /*
> + * Additional fields at end of this structure:
> + * - RootHash: Digest of Root Certificate
> + * - Certificates: Chain of ASN.1 DER-encoded X.509 v3 certificates
> + */
> +} __packed;
> +
> +#define SPDM_CHALLENGE 0x83
> +#define SPDM_MAX_OPAQUE_DATA 1024 /* SPDM 1.0.0 table 21 */
> +
> +struct spdm_challenge_req {
> + u8 version;
> + u8 code;
> + u8 param1; /* Slot number 0..7 */
> + u8 param2; /* MeasurementSummaryHash type */
> + u8 nonce[32];
> + /* End of SPDM 1.2 structure */
> +
> + u8 context[8]; /* 1.3+ */
> +} __packed;
> +
> +struct spdm_challenge_rsp {
> + u8 version;
> + u8 code;
> + u8 param1; /* Slot number 0..7 */
> + u8 param2; /* Slot mask */
> + /*
> + * Additional fields at end of this structure:
> + * - CertChainHash: Hash of struct spdm_cert_chain for selected slot
> + * - Nonce: 32 bytes long
> + * - MeasurementSummaryHash: Optional hash of selected measurements
> + * - OpaqueDataLength: 2 bytes long
> + * - OpaqueData: Up to 1024 bytes long
> + * - RequesterContext: 8 bytes long * 1.3+ *
> + * - Signature
> + */
> +} __packed;
> +
> +#define SPDM_ERROR 0x7f
> +
> +enum spdm_error_code {
> + spdm_invalid_request = 0x01,
> + spdm_invalid_session = 0x02, /* 1.1 only */
> + spdm_busy = 0x03,
> + spdm_unexpected_request = 0x04,
> + spdm_unspecified = 0x05,
> + spdm_decrypt_error = 0x06,
> + spdm_unsupported_request = 0x07,
> + spdm_request_in_flight = 0x08,
> + spdm_invalid_response_code = 0x09,
> + spdm_session_limit_exceeded = 0x0a,
> + spdm_session_required = 0x0b,
> + spdm_reset_required = 0x0c,
> + spdm_response_too_large = 0x0d,
> + spdm_request_too_large = 0x0e,
> + spdm_large_response = 0x0f,
> + spdm_message_lost = 0x10,
> + spdm_invalid_policy = 0x11, /* 1.3+ */
> + spdm_version_mismatch = 0x41,
> + spdm_response_not_ready = 0x42,
> + spdm_request_resynch = 0x43,
> + spdm_operation_failed = 0x44, /* 1.3+ */
> + spdm_no_pending_requests = 0x45, /* 1.3+ */
> + spdm_vendor_defined_error = 0xff,

Align values.

So SPDM_ERROR is in caps but the error codes aren't?

> +};
> +
> +struct spdm_error_rsp {
> + u8 version;
> + u8 code;
> + enum spdm_error_code error_code:8;

Is this always going to produce the layout you want, given that the
alignment requirements for the storage units of u8 and enum are probably
different?
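
One option (just a thought, untested) would be to store it as a plain u8
and keep the enum purely for documentation:

	u8 error_code;		/* enum spdm_error_code */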

> + u8 error_data;
> +
> + u8 extended_error_data[];
> +} __packed;
> +
> +static int spdm_err(struct device *dev, struct spdm_error_rsp *rsp)
> +{
> + switch (rsp->error_code) {
> + case spdm_invalid_request:
> + dev_err(dev, "Invalid request\n");
> + return -EINVAL;
> + case spdm_invalid_session:
> + if (rsp->version == 0x11) {
> + dev_err(dev, "Invalid session %#x\n", rsp->error_data);
> + return -EINVAL;
> + }
> + break;
> + case spdm_busy:
> + dev_err(dev, "Busy\n");
> + return -EBUSY;
> + case spdm_unexpected_request:
> + dev_err(dev, "Unexpected request\n");
> + return -EINVAL;
> + case spdm_unspecified:
> + dev_err(dev, "Unspecified error\n");
> + return -EINVAL;
> + case spdm_decrypt_error:
> + dev_err(dev, "Decrypt error\n");
> + return -EIO;
> + case spdm_unsupported_request:
> + dev_err(dev, "Unsupported request %#x\n", rsp->error_data);
> + return -EINVAL;
> + case spdm_request_in_flight:
> + dev_err(dev, "Request in flight\n");
> + return -EINVAL;
> + case spdm_invalid_response_code:
> + dev_err(dev, "Invalid response code\n");
> + return -EINVAL;
> + case spdm_session_limit_exceeded:
> + dev_err(dev, "Session limit exceeded\n");
> + return -EBUSY;
> + case spdm_session_required:
> + dev_err(dev, "Session required\n");
> + return -EINVAL;
> + case spdm_reset_required:
> + dev_err(dev, "Reset required\n");
> + return -ERESTART;

Is it really a good idea to use this return code? Isn't there even some
special handling for it somewhere? Hopefully it never leaks to anything
that will treat it as special.

If these occur (this and the one below) after there was an existing
session, -EPIPE would be one potential alternative that roughly matches
what's going on. If that's not acceptable, perhaps some connection-oriented
return code would be close enough (a session is conceptually close to a
connection anyway).

> + case spdm_response_too_large:
> + dev_err(dev, "Response too large\n");
> + return -EINVAL;
> + case spdm_request_too_large:
> + dev_err(dev, "Request too large\n");
> + return -EINVAL;
> + case spdm_large_response:
> + dev_err(dev, "Large response\n");
> + return -EMSGSIZE;
> + case spdm_message_lost:
> + dev_err(dev, "Message lost\n");
> + return -EIO;
> + case spdm_invalid_policy:
> + dev_err(dev, "Invalid policy\n");
> + return -EINVAL;
> + case spdm_version_mismatch:
> + dev_err(dev, "Version mismatch\n");
> + return -EINVAL;
> + case spdm_response_not_ready:
> + dev_err(dev, "Response not ready\n");
> + return -EINPROGRESS;
> + case spdm_request_resynch:
> + dev_err(dev, "Request resynchronization\n");
> + return -ERESTART;
> + case spdm_operation_failed:
> + dev_err(dev, "Operation failed\n");
> + return -EINVAL;
> + case spdm_no_pending_requests:
> + return -ENOENT;
> + case spdm_vendor_defined_error:
> + dev_err(dev, "Vendor defined error\n");
> + return -EINVAL;
> + }
> +
> + dev_err(dev, "Undefined error %#x\n", rsp->error_code);
> + return -EINVAL;
> +}
> +
> +/**
> + * struct spdm_state - SPDM session state
> + *
> + * @lock: Serializes multiple concurrent spdm_authenticate() calls.
> + * @authenticated: Whether device was authenticated successfully.
> + * @dev: Transport device. Used for error reporting and passed to @transport.
> + * @transport: Transport function to perform one message exchange.
> + * @transport_priv: Transport private data.
> + * @transport_sz: Maximum message size the transport is capable of (in bytes).
> + * Used as DataTransferSize in GET_CAPABILITIES exchange.
> + * @version: Maximum common supported version of requester and responder.
> + * Negotiated during GET_VERSION exchange.
> + * @responder_caps: Cached capabilities of responder.
> + * Received during GET_CAPABILITIES exchange.
> + * @base_asym_alg: Asymmetric key algorithm for signature verification of
> + * CHALLENGE_AUTH messages.
> + * Selected by responder during NEGOTIATE_ALGORITHMS exchange.
> + * @base_hash_alg: Hash algorithm for signature verification of
> + * CHALLENGE_AUTH messages.
> + * Selected by responder during NEGOTIATE_ALGORITHMS exchange.
> + * @slot_mask: Bitmask of populated certificate slots in the responder.
> + * Received during GET_DIGESTS exchange.
> + * @base_asym_enc: Human-readable name of @base_asym_alg's signature encoding.
> + * Passed to crypto subsystem when calling verify_signature().
> + * @s: Signature length of @base_asym_alg (in bytes). S or SigLen in SPDM
> + * specification.
> + * @base_hash_alg_name: Human-readable name of @base_hash_alg.
> + * Passed to crypto subsystem when calling crypto_alloc_shash() and
> + * verify_signature().
> + * @shash: Synchronous hash handle for @base_hash_alg computation.
> + * @desc: Synchronous hash context for @base_hash_alg computation.
> + * @h: Hash length of @base_hash_alg (in bytes). H in SPDM specification.
> + * @leaf_key: Public key portion of leaf certificate against which to check
> + * responder's signatures.
> + * @root_keyring: Keyring against which to check the first certificate in
> + * responder's certificate chain.
> + */
> +struct spdm_state {
> + struct mutex lock;
> + unsigned int authenticated:1;
> +
> + /* Transport */
> + struct device *dev;
> + spdm_transport *transport;
> + void *transport_priv;
> + u32 transport_sz;
> +
> + /* Negotiated state */
> + u8 version;
> + u32 responder_caps;
> + u32 base_asym_alg;
> + u32 base_hash_alg;
> + unsigned long slot_mask;
> +
> + /* Signature algorithm */
> + const char *base_asym_enc;
> + size_t s;
> +
> + /* Hash algorithm */
> + const char *base_hash_alg_name;
> + struct crypto_shash *shash;
> + struct shash_desc *desc;
> + size_t h;

I understand that h and s come directly from the naming in the spec, but
from a code-reading PoV it feels unnecessarily obfuscated not to use
hash_len and sig_len.

> +
> + /* Certificates */
> + struct public_key *leaf_key;
> + struct key *root_keyring;
> +};
> +
> +static int __spdm_exchange(struct spdm_state *spdm_state,
> + const void *req, size_t req_sz,
> + void *rsp, size_t rsp_sz)
> +{
> + const struct spdm_header *request = req;
> + struct spdm_header *response = rsp;
> + int length;
> + int rc;
> +
> + rc = spdm_state->transport(spdm_state->transport_priv, spdm_state->dev,
> + req, req_sz, rsp, rsp_sz);
> + if (rc < 0)
> + return rc;
> +
> + length = rc;

rc feels like a pretty unnecessary variable here.

> + if (length < sizeof(struct spdm_header))
> + return -EPROTO;
> +
> + if (response->code == SPDM_ERROR)
> + return spdm_err(spdm_state->dev, (struct spdm_error_rsp *)rsp);
> +
> + if (response->code != (request->code & ~SPDM_REQ)) {
> + dev_err(spdm_state->dev,
> + "Response code %#x does not match request code %#x\n",
> + response->code, request->code);
> + return -EPROTO;
> + }
> +
> + return length;
> +}
> +
> +static int spdm_exchange(struct spdm_state *spdm_state,
> + void *req, size_t req_sz, void *rsp, size_t rsp_sz)
> +{
> + struct spdm_header *req_header = req;
> +
> + if (req_sz < sizeof(struct spdm_header) ||
> + rsp_sz < sizeof(struct spdm_header))

Variable names that close to each other seem like a disaster waiting to
happen. Even changing rsp -> resp would be a huge improvement.

> + return -EINVAL;
> +
> + req_header->version = spdm_state->version;
> +
> + return __spdm_exchange(spdm_state, req, req_sz, rsp, rsp_sz);
> +}
> +
> +static const struct spdm_get_version_req spdm_get_version_req = {
> + .version = 0x10,
> + .code = SPDM_GET_VERSION,
> +};
> +
> +static int spdm_get_version(struct spdm_state *spdm_state,
> + struct spdm_get_version_rsp *rsp, size_t *rsp_sz)
> +{
> + u8 version = SPDM_MIN_VER;
> + bool foundver = false;
> + int rc, length, i;
> +
> + /*
> + * Bypass spdm_exchange() to be able to set version = 0x10.
> + * rsp buffer is large enough for the maximum possible 255 entries.
> + */
> + rc = __spdm_exchange(spdm_state, &spdm_get_version_req,
> + sizeof(spdm_get_version_req), rsp,
> + struct_size(rsp, version_number_entries, 255));
> + if (rc < 0)
> + return rc;
> +
> + length = rc;
> + if (length < sizeof(*rsp) ||
> + length < struct_size(rsp, version_number_entries,
> + rsp->version_number_entry_count)) {
> + dev_err(spdm_state->dev, "Truncated version response\n");
> + return -EIO;
> + }
> +
> + for (i = 0; i < rsp->version_number_entry_count; i++) {
> + u8 ver = get_unaligned_le16(&rsp->version_number_entries[i]) >> 8;

Name the field you're after with a #define and use FIELD_GET() here?
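
E.g. something like this, with <linux/bitfield.h> (untested; the macro
name is just made up):

#define SPDM_VERSION_NUMBER_MAJOR_MINOR		GENMASK(15, 8)

		u8 ver = FIELD_GET(SPDM_VERSION_NUMBER_MAJOR_MINOR,
				   get_unaligned_le16(&rsp->version_number_entries[i]));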

> +
> + if (ver >= version && ver <= SPDM_MAX_VER) {
> + foundver = true;
> + version = ver;
> + }
> + }
> + if (!foundver) {
> + dev_err(spdm_state->dev, "No common supported version\n");
> + return -EPROTO;
> + }
> + spdm_state->version = version;
> +
> + *rsp_sz = struct_size(rsp, version_number_entries,
> + rsp->version_number_entry_count);
> +
> + return 0;
> +}
> +
> +static int spdm_get_capabilities(struct spdm_state *spdm_state,
> + struct spdm_get_capabilities_reqrsp *req,
> + size_t *reqrsp_sz)
> +{
> + struct spdm_get_capabilities_reqrsp *rsp;
> + size_t req_sz;
> + size_t rsp_sz;
> + int rc, length;
> +
> + req->code = SPDM_GET_CAPABILITIES;
> + req->ctexponent = SPDM_CTEXPONENT;
> + req->flags = cpu_to_le32(SPDM_CAPS);
> +
> + if (spdm_state->version == 0x10) {
> + req_sz = offsetof(typeof(*req), reserved1);
> + rsp_sz = offsetof(typeof(*rsp), data_transfer_size);
> + } else if (spdm_state->version == 0x11) {
> + req_sz = offsetof(typeof(*req), data_transfer_size);
> + rsp_sz = offsetof(typeof(*rsp), data_transfer_size);
> + } else {
> + req_sz = sizeof(*req);
> + rsp_sz = sizeof(*rsp);
> + req->data_transfer_size = cpu_to_le32(spdm_state->transport_sz);
> + req->max_spdm_msg_size = cpu_to_le32(spdm_state->transport_sz);
> + }

Use switch?
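
Something like this, keeping your logic as-is (untested):

	switch (spdm_state->version) {
	case 0x10:
		req_sz = offsetof(typeof(*req), reserved1);
		rsp_sz = offsetof(typeof(*rsp), data_transfer_size);
		break;
	case 0x11:
		req_sz = offsetof(typeof(*req), data_transfer_size);
		rsp_sz = offsetof(typeof(*rsp), data_transfer_size);
		break;
	default:
		req_sz = sizeof(*req);
		rsp_sz = sizeof(*rsp);
		req->data_transfer_size = cpu_to_le32(spdm_state->transport_sz);
		req->max_spdm_msg_size = cpu_to_le32(spdm_state->transport_sz);
		break;
	}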

> +
> + rsp = (void *)req + req_sz;

It would be more logical (and would not require relying on a C extension)
to cast to u8 *, but that would then require another cast.

> +
> + rc = spdm_exchange(spdm_state, req, req_sz, rsp, rsp_sz);
> + if (rc < 0)
> + return rc;
> +
> + length = rc;
> + if (length < rsp_sz) {
> + dev_err(spdm_state->dev, "Truncated capabilities response\n");
> + return -EIO;
> + }
> +
> + spdm_state->responder_caps = le32_to_cpu(rsp->flags);

Earlier, unaligned accessors were used with the version_number_entries.
Is it intentional that they're not used here? (I cannot see what the
reason for the difference would be.)

> + if ((spdm_state->responder_caps & SPDM_MIN_CAPS) != SPDM_MIN_CAPS)
> + return -EPROTONOSUPPORT;
> +
> + if (spdm_state->version >= 0x12) {
> + u32 data_transfer_size = le32_to_cpu(rsp->data_transfer_size);
> + if (data_transfer_size < SPDM_MIN_DATA_TRANSFER_SIZE) {
> + dev_err(spdm_state->dev,
> + "Malformed capabilities response\n");
> + return -EPROTO;
> + }
> + spdm_state->transport_sz = min(spdm_state->transport_sz,
> + data_transfer_size);
> + }
> +
> + *reqrsp_sz += req_sz + rsp_sz;

Would just total_sz or something along those lines do?

> +
> + return 0;
> +}
> +
> +/**
> + * spdm_start_hash() - Build first part of CHALLENGE_AUTH hash
> + *
> + * @spdm_state: SPDM session state
> + * @transcript: GET_VERSION request and GET_CAPABILITIES request and response
> + * @transcript_sz: length of @transcript
> + * @req: NEGOTIATE_ALGORITHMS request
> + * @req_sz: length of @req
> + * @rsp: ALGORITHMS response
> + * @rsp_sz: length of @rsp
> + *
> + * We've just learned the hash algorithm to use for CHALLENGE_AUTH signature
> + * verification. Hash the GET_VERSION and GET_CAPABILITIES exchanges which
> + * have been stashed in @transcript, as well as the NEGOTIATE_ALGORITHMS
> + * exchange which has just been performed. Subsequent requests and responses
> + * will be added to the hash as they become available.
> + *
> + * Return 0 on success or a negative errno.
> + */
> +static int spdm_start_hash(struct spdm_state *spdm_state,
> + void *transcript, size_t transcript_sz,
> + void *req, size_t req_sz, void *rsp, size_t rsp_sz)
> +{
> + int rc;
> +
> + spdm_state->shash = crypto_alloc_shash(spdm_state->base_hash_alg_name,
> + 0, 0);
> + if (!spdm_state->shash)
> + return -ENOMEM;
> +
> + spdm_state->desc = kzalloc(sizeof(*spdm_state->desc) +
> + crypto_shash_descsize(spdm_state->shash),
> + GFP_KERNEL);
> + if (!spdm_state->desc)
> + return -ENOMEM;
> +
> + spdm_state->desc->tfm = spdm_state->shash;
> +
> + /* Used frequently to compute offsets, so cache H */
> + spdm_state->h = crypto_shash_digestsize(spdm_state->shash);
> +
> + rc = crypto_shash_init(spdm_state->desc);
> + if (rc)
> + return rc;

Doesn't this leak spdm_state->desc on error? (Similarly for the returns below.)

> +
> + rc = crypto_shash_update(spdm_state->desc,
> + (u8 *)&spdm_get_version_req,
> + sizeof(spdm_get_version_req));
> + if (rc)
> + return rc;
> +
> + rc = crypto_shash_update(spdm_state->desc,
> + (u8 *)transcript, transcript_sz);
> + if (rc)
> + return rc;
> +
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)req, req_sz);
> + if (rc)
> + return rc;
> +
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, rsp_sz);
> +
> + return rc;
> +}
> +
> +static int spdm_parse_algs(struct spdm_state *spdm_state)
> +{
> + switch (spdm_state->base_asym_alg) {
> + case SPDM_ASYM_RSASSA_2048:
> + spdm_state->s = 256;
> + spdm_state->base_asym_enc = "pkcs1";
> + break;
> + case SPDM_ASYM_RSASSA_3072:
> + spdm_state->s = 384;
> + spdm_state->base_asym_enc = "pkcs1";
> + break;
> + case SPDM_ASYM_RSASSA_4096:
> + spdm_state->s = 512;
> + spdm_state->base_asym_enc = "pkcs1";
> + break;
> + case SPDM_ASYM_ECDSA_ECC_NIST_P256:
> + spdm_state->s = 64;
> + spdm_state->base_asym_enc = "p1363";
> + break;
> + case SPDM_ASYM_ECDSA_ECC_NIST_P384:
> + spdm_state->s = 96;
> + spdm_state->base_asym_enc = "p1363";
> + break;
> + default:
> + dev_err(spdm_state->dev, "Unknown asym algorithm\n");
> + return -EINVAL;
> + }
> +
> + switch (spdm_state->base_hash_alg) {
> + case SPDM_HASH_SHA_256:
> + spdm_state->base_hash_alg_name = "sha256";
> + break;
> + case SPDM_HASH_SHA_384:
> + spdm_state->base_hash_alg_name = "sha384";
> + break;
> + case SPDM_HASH_SHA_512:
> + spdm_state->base_hash_alg_name = "sha512";
> + break;
> + default:
> + dev_err(spdm_state->dev, "Unknown hash algorithm\n");
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int spdm_negotiate_algs(struct spdm_state *spdm_state,
> + void *transcript, size_t transcript_sz)
> +{
> + struct spdm_req_alg_struct *req_alg_struct;
> + struct spdm_negotiate_algs_req *req;
> + struct spdm_negotiate_algs_rsp *rsp;
> + size_t req_sz = sizeof(*req);
> + size_t rsp_sz = sizeof(*rsp);
> + int rc, length;
> +
> + /* Request length shall be <= 128 bytes (SPDM 1.1.0 margin no 185) */
> + BUILD_BUG_ON(req_sz > 128);

I don't see why this really has to be here. It could be a static_assert()
below the struct declaration.
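
I.e. right below the struct declaration, something like (untested):

/* Request length shall be <= 128 bytes (SPDM 1.1.0 margin no 185) */
static_assert(sizeof(struct spdm_negotiate_algs_req) <= 128);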

> + req = kzalloc(req_sz, GFP_KERNEL);
> + if (!req)
> + return -ENOMEM;
> +
> + req->code = SPDM_NEGOTIATE_ALGS;
> + req->length = cpu_to_le16(req_sz);
> + req->base_asym_algo = cpu_to_le32(SPDM_ASYM_ALGOS);
> + req->base_hash_algo = cpu_to_le32(SPDM_HASH_ALGOS);
> +
> + rsp = kzalloc(rsp_sz, GFP_KERNEL);
> + if (!rsp) {
> + rc = -ENOMEM;
> + goto err_free_req;
> + }
> +
> + rc = spdm_exchange(spdm_state, req, req_sz, rsp, rsp_sz);
> + if (rc < 0)
> + goto err_free_rsp;
> +
> + length = rc;
> + if (length < sizeof(*rsp) ||
> + length < sizeof(*rsp) + rsp->param1 * sizeof(*req_alg_struct)) {
> + dev_err(spdm_state->dev, "Truncated algorithms response\n");
> + rc = -EIO;
> + goto err_free_rsp;
> + }
> +
> + spdm_state->base_asym_alg =
> + le32_to_cpu(rsp->base_asym_sel) & SPDM_ASYM_ALGOS;
> + spdm_state->base_hash_alg =
> + le32_to_cpu(rsp->base_hash_sel) & SPDM_HASH_ALGOS;
> +
> + /* Responder shall select exactly 1 alg (SPDM 1.0.0 table 14) */
> + if (hweight32(spdm_state->base_asym_alg) != 1 ||
> + hweight32(spdm_state->base_hash_alg) != 1 ||
> + rsp->ext_asym_sel_count != 0 ||
> + rsp->ext_hash_sel_count != 0 ||
> + rsp->param1 > req->param1) {
> + dev_err(spdm_state->dev, "Malformed algorithms response\n");
> + rc = -EPROTO;
> + goto err_free_rsp;
> + }
> +
> + rc = spdm_parse_algs(spdm_state);
> + if (rc)
> + goto err_free_rsp;
> +
> + /*
> + * If request contained a ReqAlgStruct not supported by responder,
> + * the corresponding RespAlgStruct may be omitted in response.
> + * Calculate the actual (possibly shorter) response length:
> + */
> + rsp_sz = sizeof(*rsp) + rsp->param1 * sizeof(*req_alg_struct);
> +
> + rc = spdm_start_hash(spdm_state, transcript, transcript_sz,
> + req, req_sz, rsp, rsp_sz);
> +
> +err_free_rsp:
> + kfree(rsp);
> +err_free_req:
> + kfree(req);
> +
> + return rc;
> +}
> +
> +static int spdm_get_digests(struct spdm_state *spdm_state)
> +{
> + struct spdm_get_digests_req req = { .code = SPDM_GET_DIGESTS };
> + struct spdm_get_digests_rsp *rsp;
> + size_t rsp_sz;
> + int rc, length;
> +
> + /*
> + * Assume all 8 slots are populated. We know the hash length (and thus
> + * the response size) because the responder only returns digests for
> + * the hash algorithm selected during the NEGOTIATE_ALGORITHMS exchange
> + * (SPDM 1.1.2 margin no 206).
> + */
> + rsp_sz = sizeof(*rsp) + SPDM_SLOTS * spdm_state->h;
> + rsp = kzalloc(rsp_sz, GFP_KERNEL);
> + if (!rsp)
> + return -ENOMEM;
> +
> + rc = spdm_exchange(spdm_state, &req, sizeof(req), rsp, rsp_sz);
> + if (rc < 0)
> + goto err_free_rsp;
> +
> + length = rc;
> + if (length < sizeof(*rsp) ||
> + length < sizeof(*rsp) + hweight8(rsp->param2) * spdm_state->h) {
> + dev_err(spdm_state->dev, "Truncated digests response\n");
> + rc = -EIO;
> + goto err_free_rsp;
> + }
> +
> + rsp_sz = sizeof(*rsp) + hweight8(rsp->param2) * spdm_state->h;
> +
> + /*
> + * Authentication-capable endpoints must carry at least 1 cert chain
> + * (SPDM 1.0.0 section 4.9.2.1).
> + */
> + spdm_state->slot_mask = rsp->param2;
> + if (!spdm_state->slot_mask) {
> + dev_err(spdm_state->dev, "No certificates provisioned\n");
> + rc = -EPROTO;
> + goto err_free_rsp;
> + }
> +
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)&req, sizeof(req));
> + if (rc)
> + goto err_free_rsp;
> +
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, rsp_sz);
> +
> +err_free_rsp:
> + kfree(rsp);
> +
> + return rc;
> +}
> +
> +static int spdm_validate_cert_chain(struct spdm_state *spdm_state, u8 slot,
> + u8 *certs, size_t total_length)
> +{
> + struct x509_certificate *cert, *prev = NULL;
> + bool is_leaf_cert;
> + size_t offset = 0;
> + struct key *key;
> + int rc, length;
> +
> + while (offset < total_length) {
> + rc = x509_get_certificate_length(certs + offset,
> + total_length - offset);
> + if (rc < 0) {
> + dev_err(spdm_state->dev, "Invalid certificate length "
> + "at slot %u offset %zu\n", slot, offset);
> + goto err_free_prev;
> + }
> +
> + length = rc;
> + is_leaf_cert = offset + length == total_length;
> +
> + cert = x509_cert_parse(certs + offset, length);
> + if (IS_ERR(cert)) {
> + rc = PTR_ERR(cert);
> + dev_err(spdm_state->dev, "Certificate parse error %d "
> + "at slot %u offset %zu\n", rc, slot, offset);
> + goto err_free_prev;
> + }
> + if ((is_leaf_cert ==
> + test_bit(KEY_EFLAG_CA, &cert->pub->key_eflags)) ||
> + (is_leaf_cert &&
> + !test_bit(KEY_EFLAG_DIGITALSIG, &cert->pub->key_eflags))) {
> + rc = -EKEYREJECTED;
> + dev_err(spdm_state->dev, "Malformed certificate "
> + "at slot %u offset %zu\n", slot, offset);
> + goto err_free_cert;
> + }
> + if (cert->unsupported_sig) {
> + rc = -EKEYREJECTED;
> + dev_err(spdm_state->dev, "Unsupported signature "
> + "at slot %u offset %zu\n", slot, offset);
> + goto err_free_cert;
> + }
> + if (cert->blacklisted) {
> + rc = -EKEYREJECTED;
> + goto err_free_cert;
> + }
> +
> + if (!prev) {
> + /* First cert in chain, check against root_keyring */
> + key = find_asymmetric_key(spdm_state->root_keyring,
> + cert->sig->auth_ids[0],
> + cert->sig->auth_ids[1],
> + cert->sig->auth_ids[2],
> + false);
> + if (IS_ERR(key)) {
> + dev_info(spdm_state->dev, "Root certificate "
> + "for slot %u not found in %s "
> + "keyring: %s\n", slot,
> + spdm_state->root_keyring->description,
> + cert->issuer);
> + rc = PTR_ERR(key);
> + goto err_free_cert;
> + }
> +
> + rc = verify_signature(key, cert->sig);
> + key_put(key);
> + } else {
> + /* Subsequent cert in chain, check against previous */
> + rc = public_key_verify_signature(prev->pub, cert->sig);
> + }
> +
> + if (rc) {
> + dev_err(spdm_state->dev, "Signature validation error "
> + "%d at slot %u offset %zu\n", rc, slot, offset);
> + goto err_free_cert;
> + }
> +
> + x509_free_certificate(prev);
> + offset += length;
> + prev = cert;
> + }
> +
> + prev = NULL;
> + spdm_state->leaf_key = cert->pub;
> + cert->pub = NULL;
> +
> +err_free_cert:
> + x509_free_certificate(cert);
> +err_free_prev:
> + x509_free_certificate(prev);
> + return rc;
> +}
> +
> +static int spdm_get_certificate(struct spdm_state *spdm_state, u8 slot)
> +{
> + struct spdm_get_certificate_req req = {
> + .code = SPDM_GET_CERTIFICATE,
> + .param1 = slot,
> + };
> + struct spdm_get_certificate_rsp *rsp;
> + struct spdm_cert_chain *certs = NULL;
> + size_t rsp_sz, total_length, header_length;
> + u16 remainder_length = 0xffff;

0xffff in this function should use either U16_MAX or SZ_64K - 1.

> + u16 portion_length;
> + u16 offset = 0;
> + int rc, length;
> +
> + /*
> + * It is legal for the responder to send more bytes than requested.
> + * (Note the "should" in SPDM 1.0.0 table 19.) If we allocate a
> + * too small buffer, we can't calculate the hash over the (truncated)
> + * response. Only choice is thus to allocate the maximum possible 64k.
> + */
> + rsp_sz = min_t(u32, sizeof(*rsp) + 0xffff, spdm_state->transport_sz);
> + rsp = kvmalloc(rsp_sz, GFP_KERNEL);
> + if (!rsp)
> + return -ENOMEM;
> +
> + do {
> + /*
> + * If transport_sz is sufficiently large, first request will be
> + * for offset 0 and length 0xffff, which means entire cert
> + * chain (SPDM 1.0.0 table 18).
> + */
> + req.offset = cpu_to_le16(offset);
> + req.length = cpu_to_le16(min_t(size_t, remainder_length,
> + rsp_sz - sizeof(*rsp)));
> +
> + rc = spdm_exchange(spdm_state, &req, sizeof(req), rsp, rsp_sz);
> + if (rc < 0)
> + goto err_free_certs;
> +
> + length = rc;
> + if (length < sizeof(*rsp) ||
> + length < sizeof(*rsp) + le16_to_cpu(rsp->portion_length)) {
> + dev_err(spdm_state->dev,
> + "Truncated certificate response\n");
> + rc = -EIO;
> + goto err_free_certs;
> + }
> +
> + portion_length = le16_to_cpu(rsp->portion_length);
> + remainder_length = le16_to_cpu(rsp->remainder_length);
> +
> + /*
> + * On first response we learn total length of cert chain.
> + * Should portion_length + remainder_length exceed 0xffff,
> + * the min() ensures that the malformed check triggers below.
> + */
> + if (!certs) {
> + total_length = min(portion_length + remainder_length,
> + 0xffff);
> + certs = kvmalloc(total_length, GFP_KERNEL);
> + if (!certs) {
> + rc = -ENOMEM;
> + goto err_free_certs;
> + }
> + }
> +
> + if (!portion_length ||
> + (rsp->param1 & 0xf) != slot ||

Name the field with #define?
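
E.g. (untested; the macro name is just a suggestion):

#define SPDM_GET_CERTIFICATE_RSP_SLOT	GENMASK(3, 0)

		if (!portion_length ||
		    FIELD_GET(SPDM_GET_CERTIFICATE_RSP_SLOT, rsp->param1) != slot ||
		    offset + portion_length + remainder_length != total_length) {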

> + offset + portion_length + remainder_length != total_length)
> + {
> + dev_err(spdm_state->dev,
> + "Malformed certificate response\n");
> + rc = -EPROTO;
> + goto err_free_certs;
> + }
> +
> + memcpy((u8 *)certs + offset, rsp->cert_chain, portion_length);
> + offset += portion_length;
> +
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)&req,
> + sizeof(req));
> + if (rc)
> + goto err_free_certs;
> +
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp,
> + sizeof(*rsp) + portion_length);
> + if (rc)
> + goto err_free_certs;
> +
> + } while (remainder_length > 0);
> +
> + header_length = sizeof(struct spdm_cert_chain) + spdm_state->h;
> +
> + if (total_length < header_length ||
> + total_length != le16_to_cpu(certs->length)) {
> + dev_err(spdm_state->dev,
> + "Malformed certificate chain in slot %u\n", slot);
> + rc = -EPROTO;
> + goto err_free_certs;
> + }
> +
> + rc = spdm_validate_cert_chain(spdm_state, slot,
> + (u8 *)certs + header_length,
> + total_length - header_length);
> +
> +err_free_certs:
> + kvfree(certs);
> + kvfree(rsp);
> + return rc;
> +}
> +
> +#define SPDM_PREFIX_SZ 64 /* SPDM 1.2.0 margin no 803 */
> +#define SPDM_COMBINED_PREFIX_SZ 100 /* SPDM 1.2.0 margin no 806 */
> +
> +/**
> + * spdm_create_combined_prefix() - Create combined_spdm_prefix for a hash
> + *
> + * @spdm_state: SPDM session state
> + * @spdm_context: SPDM context
> + * @buf: Buffer to receive combined_spdm_prefix (100 bytes)
> + *
> + * From SPDM 1.2, a hash is prefixed with the SPDM version and context before
> + * a signature is generated (or verified) over the resulting concatenation
> + * (SPDM 1.2.0 section 15). Create that prefix.
> + */
> +static void spdm_create_combined_prefix(struct spdm_state *spdm_state,
> + const char *spdm_context, void *buf)
> +{
> + u8 minor = spdm_state->version & 0xf;
> + u8 major = spdm_state->version >> 4;

Name the fields with #define and use FIELD_GET().
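
E.g. (untested; names are just suggestions):

#define SPDM_VERSION_MAJOR	GENMASK(7, 4)
#define SPDM_VERSION_MINOR	GENMASK(3, 0)

	u8 minor = FIELD_GET(SPDM_VERSION_MINOR, spdm_state->version);
	u8 major = FIELD_GET(SPDM_VERSION_MAJOR, spdm_state->version);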

> + size_t len = strlen(spdm_context);
> + int rc, zero_pad;
> +
> + rc = snprintf(buf, SPDM_PREFIX_SZ + 1,
> + "dmtf-spdm-v%hhx.%hhx.*dmtf-spdm-v%hhx.%hhx.*"
> + "dmtf-spdm-v%hhx.%hhx.*dmtf-spdm-v%hhx.%hhx.*",

Why are these using the u8 format specifier %hhx?

> + major, minor, major, minor, major, minor, major, minor);
> + WARN_ON(rc != SPDM_PREFIX_SZ);
> +
> + zero_pad = SPDM_COMBINED_PREFIX_SZ - SPDM_PREFIX_SZ - 1 - len;
> + WARN_ON(zero_pad < 0);
> +
> + memset(buf + SPDM_PREFIX_SZ + 1, 0, zero_pad);
> + memcpy(buf + SPDM_PREFIX_SZ + 1 + zero_pad, spdm_context, len);
> +}
> +
> +/**
> + * spdm_verify_signature() - Verify signature against leaf key
> + *
> + * @spdm_state: SPDM session state
> + * @s: Signature
> + * @spdm_context: SPDM context (used to create combined_spdm_prefix)
> + *
> + * Implementation of the abstract SPDMSignatureVerify() function described in
> + * SPDM 1.2.0 section 16: Compute the hash in @spdm_state->desc and verify
> + * that its signature @s was generated with @spdm_state->leaf_key.
> + * Return 0 on success or a negative errno.
> + */
> +static int spdm_verify_signature(struct spdm_state *spdm_state, u8 *s,
> + const char *spdm_context)
> +{
> + struct public_key_signature sig = {
> + .s = s,
> + .s_size = spdm_state->s,
> + .encoding = spdm_state->base_asym_enc,
> + .hash_algo = spdm_state->base_hash_alg_name,
> + };
> + u8 *m, *mhash = NULL;
> + int rc;
> +
> + m = kmalloc(SPDM_COMBINED_PREFIX_SZ + spdm_state->h, GFP_KERNEL);
> + if (!m)
> + return -ENOMEM;
> +
> + rc = crypto_shash_final(spdm_state->desc, m + SPDM_COMBINED_PREFIX_SZ);
> + if (rc)
> + goto err_free_m;
> +
> + if (spdm_state->version <= 0x11) {
> + /*
> + * Until SPDM 1.1, the signature is computed only over the hash
> + * (SPDM 1.0.0 section 4.9.2.7).
> + */
> + sig.digest = m + SPDM_COMBINED_PREFIX_SZ;
> + sig.digest_size = spdm_state->h;
> + } else {
> + /*
> + * From SPDM 1.2, the hash is prefixed with spdm_context before
> + * computing the signature over the resulting message M
> + * (SPDM 1.2.0 margin no 841).
> + */
> + spdm_create_combined_prefix(spdm_state, spdm_context, m);
> +
> + /*
> + * RSA and ECDSA algorithms require that M is hashed once more.
> + * EdDSA and SM2 algorithms omit that step.
> + * The switch statement prepares for their introduction.
> + */
> + switch (spdm_state->base_asym_alg) {
> + default:
> + mhash = kmalloc(spdm_state->h, GFP_KERNEL);
> + if (!mhash) {
> + rc = -ENOMEM;
> + goto err_free_m;
> + }
> +
> + rc = crypto_shash_digest(spdm_state->desc, m,
> + SPDM_COMBINED_PREFIX_SZ + spdm_state->h,
> + mhash);
> + if (rc)
> + goto err_free_mhash;
> +
> + sig.digest = mhash;
> + sig.digest_size = spdm_state->h;
> + break;
> + }
> + }
> +
> + rc = public_key_verify_signature(spdm_state->leaf_key, &sig);
> +
> +err_free_mhash:
> + kfree(mhash);
> +err_free_m:
> + kfree(m);
> + return rc;
> +}
> +
> +/**
> + * spdm_challenge_rsp_sz() - Calculate CHALLENGE_AUTH response size
> + *
> + * @spdm_state: SPDM session state
> + * @rsp: CHALLENGE_AUTH response (optional)
> + *
> + * A CHALLENGE_AUTH response contains multiple variable-length fields
> + * as well as optional fields. This helper eases calculating its size.
> + *
> + * If @rsp is %NULL, assume the maximum OpaqueDataLength of 1024 bytes
> + * (SPDM 1.0.0 table 21). Otherwise read OpaqueDataLength from @rsp.
> + * OpaqueDataLength can only be > 0 for SPDM 1.0 and 1.1, as they lack
> + * the OtherParamsSupport field in the NEGOTIATE_ALGORITHMS request.
> + * For SPDM 1.2+, we do not offer any Opaque Data Formats in that field,
> + * which forces OpaqueDataLength to 0 (SPDM 1.2.0 margin no 261).
> + */
> +static size_t spdm_challenge_rsp_sz(struct spdm_state *spdm_state,
> + struct spdm_challenge_rsp *rsp)
> +{
> + size_t size = sizeof(*rsp) /* Header */

Extra space between size_t and size.

> + + spdm_state->h /* CertChainHash */
> + + 32; /* Nonce */

Add SPDM_NONCE_SIZE?

> +
> + if (rsp)
> + /* May be unaligned if hash algorithm has unusual length. */
> + size += get_unaligned_le16((u8 *)rsp + size);
> + else
> + size += SPDM_MAX_OPAQUE_DATA; /* OpaqueData */
> +
> + size += 2; /* OpaqueDataLength */
> +
> + if (spdm_state->version >= 0x13)
> + size += 8; /* RequesterContext */
> +
> + return size + spdm_state->s; /* Signature */

Remove the extra space.

> +}
> +
> +static int spdm_challenge(struct spdm_state *spdm_state, u8 slot)
> +{
> + size_t req_sz, rsp_sz, rsp_sz_max, sig_offset;
> + struct spdm_challenge_req req = {
> + .code = SPDM_CHALLENGE,
> + .param1 = slot,
> + .param2 = 0, /* no measurement summary hash */
> + };
> + struct spdm_challenge_rsp *rsp;
> + int rc, length;
> +
> + get_random_bytes(&req.nonce, sizeof(req.nonce));
> +
> + if (spdm_state->version <= 0x12)
> + req_sz = offsetof(typeof(req), context);
> + else
> + req_sz = sizeof(req);
> +
> + rsp_sz_max = spdm_challenge_rsp_sz(spdm_state, NULL);
> + rsp = kzalloc(rsp_sz_max, GFP_KERNEL);
> + if (!rsp)
> + return -ENOMEM;
> +
> + rc = spdm_exchange(spdm_state, &req, req_sz, rsp, rsp_sz_max);
> + if (rc < 0)
> + goto err_free_rsp;
> +
> + length = rc;
> + rsp_sz = spdm_challenge_rsp_sz(spdm_state, rsp);
> + if (length < rsp_sz) {
> + dev_err(spdm_state->dev, "Truncated challenge_auth response\n");
> + rc = -EIO;
> + goto err_free_rsp;
> + }
> +
> + /* Last step of building the hash */
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)&req, req_sz);
> + if (rc)
> + goto err_free_rsp;
> +
> + sig_offset = rsp_sz - spdm_state->s;
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, sig_offset);
> + if (rc)
> + goto err_free_rsp;
> +
> + /* Hash is complete and signature received; verify against leaf key */
> + rc = spdm_verify_signature(spdm_state, (u8 *)rsp + sig_offset,
> + "responder-challenge_auth signing");
> + if (rc)
> + dev_err(spdm_state->dev,
> + "Failed to verify challenge_auth signature: %d\n", rc);
> +
> +err_free_rsp:
> + kfree(rsp);
> + return rc;
> +}
> +
> +static void spdm_reset(struct spdm_state *spdm_state)
> +{
> + public_key_free(spdm_state->leaf_key);
> + spdm_state->leaf_key = NULL;
> +
> + kfree(spdm_state->desc);
> + spdm_state->desc = NULL;
> +
> + crypto_free_shash(spdm_state->shash);
> + spdm_state->shash = NULL;
> +}
> +
> +/**
> + * spdm_authenticate() - Authenticate device
> + *
> + * @spdm_state: SPDM session state
> + *
> + * Authenticate a device through a sequence of GET_VERSION, GET_CAPABILITIES,
> + * NEGOTIATE_ALGORITHMS, GET_DIGESTS, GET_CERTIFICATE and CHALLENGE exchanges.
> + *
> + * Perform internal locking to serialize multiple concurrent invocations.
> + * Can be called repeatedly for reauthentication.
> + *
> + * Return 0 on success or a negative errno. In particular, -EPROTONOSUPPORT
> + * indicates that authentication is not supported by the device.
> + */
> +int spdm_authenticate(struct spdm_state *spdm_state)
> +{
> + size_t transcript_sz;
> + void *transcript;
> + int rc = -ENOMEM;
> + u8 slot;
> +
> + mutex_lock(&spdm_state->lock);
> + spdm_reset(spdm_state);
> +
> + /*
> + * For CHALLENGE_AUTH signature verification, a hash is computed over
> + * all exchanged messages to detect modification by a man-in-the-middle
> + * or media error. However the hash algorithm is not known until the
> + * NEGOTIATE_ALGORITHMS response has been received. The preceding
> + * GET_VERSION and GET_CAPABILITIES exchanges are therefore stashed
> + * in a transcript buffer and consumed once the algorithm is known.
> + * The buffer size is sufficient for the largest possible messages with
> + * 255 version entries and the capability fields added by SPDM 1.2.
> + */
> + transcript = kzalloc(struct_size_t(struct spdm_get_version_rsp,
> + version_number_entries, 255) +
> + sizeof(struct spdm_get_capabilities_reqrsp) * 2,
> + GFP_KERNEL);
> + if (!transcript)
> + goto unlock;
> +
> + rc = spdm_get_version(spdm_state, transcript, &transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_get_capabilities(spdm_state, transcript + transcript_sz,
> + &transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_negotiate_algs(spdm_state, transcript, transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_get_digests(spdm_state);
> + if (rc)
> + goto unlock;
> +
> + for_each_set_bit(slot, &spdm_state->slot_mask, SPDM_SLOTS) {
> + rc = spdm_get_certificate(spdm_state, slot);
> + if (rc == 0)
> + break; /* success */
> + if (rc != -ENOKEY && rc != -EKEYREJECTED)
> + break; /* try next slot only on signature error */
> + }
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_challenge(spdm_state, slot);
> +
> +unlock:
> + if (rc)
> + spdm_reset(spdm_state);
> + spdm_state->authenticated = !rc;
> + mutex_unlock(&spdm_state->lock);
> + kfree(transcript);
> + return rc;
> +}
> +EXPORT_SYMBOL_GPL(spdm_authenticate);
> +
> +/**
> + * spdm_authenticated() - Whether device was authenticated successfully
> + *
> + * @spdm_state: SPDM session state
> + *
> + * Return true if the most recent spdm_authenticate() call was successful.
> + */
> +bool spdm_authenticated(struct spdm_state *spdm_state)
> +{
> + return spdm_state->authenticated;
> +}
> +EXPORT_SYMBOL_GPL(spdm_authenticated);
> +
> +/**
> + * spdm_create() - Allocate SPDM session
> + *
> + * @dev: Transport device
> + * @transport: Transport function to perform one message exchange
> + * @transport_priv: Transport private data
> + * @transport_sz: Maximum message size the transport is capable of (in bytes)
> + * @keyring: Trusted root certificates
> + *
> + * Returns a pointer to the allocated SPDM session state or NULL on error.
> + */
> +struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
> + void *transport_priv, u32 transport_sz,
> + struct key *keyring)
> +{
> + struct spdm_state *spdm_state = kzalloc(sizeof(*spdm_state), GFP_KERNEL);
> +
> + if (!spdm_state)
> + return NULL;
> +
> + spdm_state->dev = dev;
> + spdm_state->transport = transport;
> + spdm_state->transport_priv = transport_priv;
> + spdm_state->transport_sz = transport_sz;
> + spdm_state->root_keyring = keyring;
> +
> + mutex_init(&spdm_state->lock);
> +
> + return spdm_state;
> +}
> +EXPORT_SYMBOL_GPL(spdm_create);
> +
> +/**
> + * spdm_destroy() - Destroy SPDM session
> + *
> + * @spdm_state: SPDM session state
> + */
> +void spdm_destroy(struct spdm_state *spdm_state)
> +{
> + spdm_reset(spdm_state);
> + mutex_destroy(&spdm_state->lock);
> + kfree(spdm_state);
> +}
> +EXPORT_SYMBOL_GPL(spdm_destroy);
> +
> +MODULE_LICENSE("GPL");
>

--
i.

2023-10-03 14:39:56

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Thu, 28 Sep 2023 19:32:37 +0200
Lukas Wunner <[email protected]> wrote:

> From: Jonathan Cameron <[email protected]>
>
> The Security Protocol and Data Model (SPDM) allows for authentication,
> measurement, key exchange and encrypted sessions with devices.
>
> A commonly used term for authentication and measurement is attestation.
>
> SPDM was conceived by the Distributed Management Task Force (DMTF).
> Its specification defines a request/response protocol spoken between
> host and attached devices over a variety of transports:
>
> https://www.dmtf.org/dsp/DSP0274
>
> This implementation supports SPDM 1.0 through 1.3 (the latest version).

I've no strong objection to allowing 1.0, but I think we do need to
control the minimum accepted version somehow, as I'm not that keen on
having security folk analyze old versions...

> It is designed to be transport-agnostic as the kernel already supports
> two different SPDM-capable transports:
>
> * PCIe Data Object Exchange (PCIe r6.1 sec 6.30, drivers/pci/doe.c)
> * Management Component Transport Protocol (MCTP,
> Documentation/networking/mctp.rst)

The MCTP side of things is going to be interesting because you mostly
need to jump through a bunch of hoops (address assignment, routing setup,
etc.) before you can actually talk to a device. That all involves
a userspace agent, so I'm not 100% sure how this will all turn out.
However, it still makes sense to have a transport-agnostic implementation;
if nothing else it makes it easier to review, as it keeps us within
one specification.

>
> Use cases for SPDM include, but are not limited to:
>
> * PCIe Component Measurement and Authentication (PCIe r6.1 sec 6.31)
> * Compute Express Link (CXL r3.0 sec 14.11.6)
> * Open Compute Project (Attestation of System Components r1.0)
> https://www.opencompute.org/documents/attestation-v1-0-20201104-pdf

Alistair, would it make sense to also call out some of the storage
use cases you are interested in?

>
> The initial focus of this implementation is enabling PCIe CMA device
> authentication. As such, only a subset of the SPDM specification is
> contained herein, namely the request/response sequence GET_VERSION,
> GET_CAPABILITIES, NEGOTIATE_ALGORITHMS, GET_DIGESTS, GET_CERTIFICATE
> and CHALLENGE.
>
> A simple API is provided for subsystems wishing to authenticate devices:
> spdm_create(), spdm_authenticate() (can be called repeatedly for
> reauthentication) and spdm_destroy(). Certificates presented by devices
> are validated against an in-kernel keyring of trusted root certificates.
> A pointer to the keyring is passed to spdm_create().
>
> The set of supported cryptographic algorithms is limited to those
> declared mandatory in PCIe r6.1 sec 6.31.3. Adding more algorithms
> is straightforward as long as the crypto subsystem supports them.
>
> Future commits will extend this implementation with support for
> measurement, key exchange and encrypted sessions.
>
> So far, only the SPDM requester role is implemented. Care was taken to
> allow for effortless addition of the responder role at a later stage.
> This could be needed for a PCIe host bridge operating in endpoint mode.
> The responder role will be able to reuse struct definitions and helpers
> such as spdm_create_combined_prefix(). Those can be moved to
> spdm_common.{h,c} files upon introduction of the responder role.
> For now, all is kept in a single source file to avoid polluting the
> global namespace with unnecessary symbols.
>
> Credits: Jonathan wrote a proof-of-concept of this SPDM implementation.
> Lukas reworked it for upstream.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
Feels like a Co-developed-by: tag for Lukas is appropriate here,
as you've done quite a lot of work on this.

I've forgotten most of this code. Hopefully I'll be more
able to spot bugs than if I remembered how it works :)
All comments ended up being fairly superficial stuff.
Looks good to me otherwise, and anyway it would be odd if I gave an
RB on 'my own patch' :)

> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> MAINTAINERS | 9 +
> include/linux/spdm.h | 35 +
> lib/Kconfig | 15 +
> lib/Makefile | 2 +
> lib/spdm_requester.c | 1487 ++++++++++++++++++++++++++++++++++++++++++
> 5 files changed, 1548 insertions(+)
> create mode 100644 include/linux/spdm.h
> create mode 100644 lib/spdm_requester.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 90f13281d297..2591d2217d65 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -19299,6 +19299,15 @@ M: Security Officers <[email protected]>
> S: Supported
> F: Documentation/process/security-bugs.rst
>
> +SECURITY PROTOCOL AND DATA MODEL (SPDM)
> +M: Jonathan Cameron <[email protected]>
> +M: Lukas Wunner <[email protected]>
> +L: [email protected]
> +L: [email protected]
> +S: Maintained
> +F: include/linux/spdm.h
> +F: lib/spdm*
> +
> SECURITY SUBSYSTEM
> M: Paul Moore <[email protected]>
> M: James Morris <[email protected]>
> diff --git a/include/linux/spdm.h b/include/linux/spdm.h
> new file mode 100644
> index 000000000000..e824063793a7
> --- /dev/null
> +++ b/include/linux/spdm.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMTF Security Protocol and Data Model (SPDM)
> + * https://www.dmtf.org/dsp/DSP0274
> + *
> + * Copyright (C) 2021-22 Huawei
> + * Jonathan Cameron <[email protected]>
> + *
> + * Copyright (C) 2022-23 Intel Corporation
> + */
> +
> +#ifndef _SPDM_H_
> +#define _SPDM_H_
> +
> +#include <linux/types.h>
> +
> +struct key;
> +struct device;
> +struct spdm_state;
> +
> +typedef int (spdm_transport)(void *priv, struct device *dev,
> + const void *request, size_t request_sz,
> + void *response, size_t response_sz);
> +
> +struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
> + void *transport_priv, u32 transport_sz,
> + struct key *keyring);
> +
> +int spdm_authenticate(struct spdm_state *spdm_state);
> +
> +bool spdm_authenticated(struct spdm_state *spdm_state);
> +
> +void spdm_destroy(struct spdm_state *spdm_state);
> +
> +#endif
> diff --git a/lib/Kconfig b/lib/Kconfig
> index c686f4adc124..3516cf1dad16 100644
> --- a/lib/Kconfig
> +++ b/lib/Kconfig
> @@ -764,3 +764,18 @@ config ASN1_ENCODER
>
> config POLYNOMIAL
> tristate
> +
> +config SPDM_REQUESTER
> + tristate
> + select KEYS
> + select ASYMMETRIC_KEY_TYPE
> + select ASYMMETRIC_PUBLIC_KEY_SUBTYPE
> + select X509_CERTIFICATE_PARSER
> + help
> + The Security Protocol and Data Model (SPDM) allows for authentication,

This file is inconsistent but tab + 2 spaces seems more common for help
text. I don't mind though if you prefer this.

> + measurement, key exchange and encrypted sessions with devices. This
> + option enables support for the SPDM requester role.
> +
> + Crypto algorithms offered to SPDM responders are limited to those
> + enabled in .config. Drivers selecting SPDM_REQUESTER need to also
> + select any algorithms they deem mandatory.
> diff --git a/lib/Makefile b/lib/Makefile
> index 740109b6e2c8..d9ae58a9ca83 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -315,6 +315,8 @@ obj-$(CONFIG_PERCPU_TEST) += percpu_test.o
> obj-$(CONFIG_ASN1) += asn1_decoder.o
> obj-$(CONFIG_ASN1_ENCODER) += asn1_encoder.o
>
> +obj-$(CONFIG_SPDM_REQUESTER) += spdm_requester.o
> +
> obj-$(CONFIG_FONT_SUPPORT) += fonts/
>
> hostprogs := gen_crc32table
> diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
> new file mode 100644
> index 000000000000..407041036599
> --- /dev/null
> +++ b/lib/spdm_requester.c
> @@ -0,0 +1,1487 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMTF Security Protocol and Data Model (SPDM)
> + * https://www.dmtf.org/dsp/DSP0274
> + *
> + * Copyright (C) 2021-22 Huawei
> + * Jonathan Cameron <[email protected]>
> + *
> + * Copyright (C) 2022-23 Intel Corporation
> + */
> +
> +#define dev_fmt(fmt) "SPDM: " fmt
> +
> +#include <linux/dev_printk.h>
> +#include <linux/key.h>
> +#include <linux/module.h>
> +#include <linux/random.h>
> +#include <linux/spdm.h>
> +
> +#include <asm/unaligned.h>
> +#include <crypto/hash.h>
> +#include <crypto/public_key.h>
> +#include <keys/asymmetric-type.h>
> +#include <keys/x509-parser.h>
> +
> +/* SPDM versions supported by this implementation */
> +#define SPDM_MIN_VER 0x10
> +#define SPDM_MAX_VER 0x13
> +
Given how hard I find the SPDM specifications to navigate,
perhaps we should provide some breadcrumbs for reviewers?
/*
* SPDM 1.3.0
* Table 13 - Flag Fields definitions for the Requester
* Table 14 - Flag Fields definitions for the Responder
*/
> +#define SPDM_CACHE_CAP BIT(0) /* response only */
> +#define SPDM_CERT_CAP BIT(1)
> +#define SPDM_CHAL_CAP BIT(2)
> +#define SPDM_MEAS_CAP_MASK GENMASK(4, 3) /* response only */
> +#define SPDM_MEAS_CAP_NO 0 /* response only */
> +#define SPDM_MEAS_CAP_MEAS 1 /* response only */
> +#define SPDM_MEAS_CAP_MEAS_SIG 2 /* response only */
> +#define SPDM_MEAS_FRESH_CAP BIT(5) /* response only */

This is awkward, but SPDM 1.0.1 has PSS_CAP in bits 6 and 7 of byte 1.
Looks fine by the time of 1.1.0.

> +#define SPDM_ENCRYPT_CAP BIT(6)
> +#define SPDM_MAC_CAP BIT(7)
> +#define SPDM_MUT_AUTH_CAP BIT(8)
/* 1.1.0 */
> +#define SPDM_KEY_EX_CAP BIT(9)
/* 1.1.0 */
> +#define SPDM_PSK_CAP_MASK GENMASK(11, 10)
/* 1.1.0 */
> +#define SPDM_PSK_CAP_NO 0
> +#define SPDM_PSK_CAP_PSK 1
> +#define SPDM_PSK_CAP_PSK_CTX 2 /* response only */
> +#define SPDM_ENCAP_CAP BIT(12)
/* 1.1.0 */
> +#define SPDM_HBEAT_CAP BIT(13)
/* 1.1.0 */
> +#define SPDM_KEY_UPD_CAP BIT(14)
/* 1.1.0 */
> +#define SPDM_HANDSHAKE_ITC_CAP BIT(15)
/* 1.1.0 */
> +#define SPDM_PUB_KEY_ID_CAP BIT(16)
/* 1.1.0 */

> +#define SPDM_CHUNK_CAP BIT(17) /* 1.2 */
> +#define SPDM_ALIAS_CERT_CAP BIT(18) /* 1.2 response only */
> +#define SPDM_SET_CERT_CAP BIT(19) /* 1.2 response only */
> +#define SPDM_CSR_CAP BIT(20) /* 1.2 response only */
> +#define SPDM_CERT_INST_RESET_CAP BIT(21) /* 1.2 response only */
> +#define SPDM_EP_INFO_CAP_MASK GENMASK(23, 22) /* 1.3 */
> +#define SPDM_EP_INFO_CAP_NO 0 /* 1.3 */
> +#define SPDM_EP_INFO_CAP_RSP 1 /* 1.3 */
> +#define SPDM_EP_INFO_CAP_RSP_SIG 2 /* 1.3 */
> +#define SPDM_MEL_CAP BIT(24) /* 1.3 response only */
> +#define SPDM_EVENT_CAP BIT(25) /* 1.3 */
> +#define SPDM_MULTI_KEY_CAP_MASK GENMASK(27, 26) /* 1.3 */
> +#define SPDM_MULTI_KEY_CAP_NO 0 /* 1.3 */
> +#define SPDM_MULTI_KEY_CAP_ONLY 1 /* 1.3 */
> +#define SPDM_MULTI_KEY_CAP_SEL 2 /* 1.3 */
> +#define SPDM_GET_KEY_PAIR_INFO_CAP BIT(28) /* 1.3 response only */
> +#define SPDM_SET_KEY_PAIR_INFO_CAP BIT(29) /* 1.3 response only */
> +
> +/* SPDM capabilities supported by this implementation */
> +#define SPDM_CAPS (SPDM_CERT_CAP | SPDM_CHAL_CAP)
> +
> +/* SPDM capabilities required from responders */
> +#define SPDM_MIN_CAPS (SPDM_CERT_CAP | SPDM_CHAL_CAP)
> +
> +/*
> + * SPDM cryptographic timeout of this implementation:
> + * Assume calculations may take up to 1 sec on a busy machine, which equals
> + * roughly 1 << 20. That's within the limits mandated for responders by CMA
> + * (1 << 23 usec, PCIe r6.1 sec 6.31.3) and DOE (1 sec, PCIe r6.1 sec 6.30.2).
> + * Used in GET_CAPABILITIES exchange.
> + */
> +#define SPDM_CTEXPONENT 20
> +
> +#define SPDM_ASYM_RSASSA_2048 BIT(0)
> +#define SPDM_ASYM_RSAPSS_2048 BIT(1)
> +#define SPDM_ASYM_RSASSA_3072 BIT(2)
> +#define SPDM_ASYM_RSAPSS_3072 BIT(3)
> +#define SPDM_ASYM_ECDSA_ECC_NIST_P256 BIT(4)
> +#define SPDM_ASYM_RSASSA_4096 BIT(5)
> +#define SPDM_ASYM_RSAPSS_4096 BIT(6)
> +#define SPDM_ASYM_ECDSA_ECC_NIST_P384 BIT(7)
> +#define SPDM_ASYM_ECDSA_ECC_NIST_P521 BIT(8)
> +#define SPDM_ASYM_SM2_ECC_SM2_P256 BIT(9)
/* 1.2.0 */
> +#define SPDM_ASYM_EDDSA_ED25519 BIT(10)
/* 1.2.0 */
> +#define SPDM_ASYM_EDDSA_ED448 BIT(11)
/* 1.2.0 */
I have far too many versions of this spec open currently...

> +
> +#define SPDM_HASH_SHA_256 BIT(0)
> +#define SPDM_HASH_SHA_384 BIT(1)
> +#define SPDM_HASH_SHA_512 BIT(2)
> +#define SPDM_HASH_SHA3_256 BIT(3)
> +#define SPDM_HASH_SHA3_384 BIT(4)
> +#define SPDM_HASH_SHA3_512 BIT(5)
> +#define SPDM_HASH_SM3_256 BIT(6)
/* 1.2.0 */

> +
> +#if IS_ENABLED(CONFIG_CRYPTO_RSA)
> +#define SPDM_ASYM_RSA SPDM_ASYM_RSASSA_2048 | \
> + SPDM_ASYM_RSASSA_3072 | \
> + SPDM_ASYM_RSASSA_4096 |

I'm not keen on the trailing |

Maybe,

#else
#define SPDM_ASYM_RSA 0
#endif


> +#endif
> +
> +#if IS_ENABLED(CONFIG_CRYPTO_ECDSA)
> +#define SPDM_ASYM_ECDSA SPDM_ASYM_ECDSA_ECC_NIST_P256 | \
> + SPDM_ASYM_ECDSA_ECC_NIST_P384 |
> +#endif
> +
> +#if IS_ENABLED(CONFIG_CRYPTO_SHA256)
> +#define SPDM_HASH_SHA2_256 SPDM_HASH_SHA_256 |
> +#endif
> +
> +#if IS_ENABLED(CONFIG_CRYPTO_SHA512)
> +#define SPDM_HASH_SHA2_384_512 SPDM_HASH_SHA_384 | \
> + SPDM_HASH_SHA_512 |
> +#endif
> +
> +/* SPDM algorithms supported by this implementation */
> +#define SPDM_ASYM_ALGOS (SPDM_ASYM_RSA \
> + SPDM_ASYM_ECDSA 0)
Doesn't this give 'not defined' errors for these if the config options above
aren't set? I think we need an #else for each of them (see the sketch below).
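Something like this would keep the OR-chains self-contained (untested sketch
of the #else idea from above):

#if IS_ENABLED(CONFIG_CRYPTO_RSA)
#define SPDM_ASYM_RSA		(SPDM_ASYM_RSASSA_2048 | \
				 SPDM_ASYM_RSASSA_3072 | \
				 SPDM_ASYM_RSASSA_4096)
#else
#define SPDM_ASYM_RSA		0
#endif

/* ditto for SPDM_ASYM_ECDSA, SPDM_HASH_SHA2_256 and SPDM_HASH_SHA2_384_512 */

#define SPDM_ASYM_ALGOS		(SPDM_ASYM_RSA | SPDM_ASYM_ECDSA)
#define SPDM_HASH_ALGOS		(SPDM_HASH_SHA2_256 | SPDM_HASH_SHA2_384_512)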
> +
> +#define SPDM_HASH_ALGOS (SPDM_HASH_SHA2_256 \
> + SPDM_HASH_SHA2_384_512 0)
> +
...

> +#define SPDM_GET_CAPABILITIES 0xE1
> +#define SPDM_MIN_DATA_TRANSFER_SIZE 42 /* SPDM 1.2.0 margin no 226 */
> +
> +/* For this exchange the request and response messages have the same form */

Not before 1.1.0 they don't...

> +struct spdm_get_capabilities_reqrsp {
> + u8 version;
> + u8 code;
> + u8 param1;
> + u8 param2;
> + /* End of SPDM 1.0 structure */

True for the request, but the response is different (which breaks the comment
above). That means we should probably split this, just for documentation
purposes. Or add more comments...

You have it right where it's used, so it's just a question of bringing the
comments in line with that code.



> +
> + u8 reserved1;
> + u8 ctexponent;
> + u16 reserved2;
> +
> + __le32 flags;
> + /* End of SPDM 1.1 structure */
> +
> + __le32 data_transfer_size; /* 1.2+ */
> + __le32 max_spdm_msg_size; /* 1.2+ */

There's potentially more for the 1.3 response...
Supported Algorithms seems to have been added of AlgSize if
param1 bit 1 is set.


> +} __packed;
> +
> +#define SPDM_NEGOTIATE_ALGS 0xE3
> +
> +struct spdm_negotiate_algs_req {
> + u8 version;
> + u8 code;
> + u8 param1; /* Number of ReqAlgStruct entries at end */
> + u8 param2;
> +
> + __le16 length;
> + u8 measurement_specification;
> + u8 other_params_support; /* 1.2+ */

Probably comment that it's reserved pre-1.2, rather than later elements
moving around. I guess some catch-all text at the top of the file could say
that fields at the end which don't exist in earlier versions mean shorter
structures, whereas fields in the middle replace reserved space.

> +
> + __le32 base_asym_algo;
> + __le32 base_hash_algo;
> +
> + u8 reserved1[12];
> + u8 ext_asym_count;
> + u8 ext_hash_count;
> + u8 reserved2;
> + u8 mel_specification; /* 1.3+ */
> +
> + /*
> + * Additional optional fields at end of this structure:
> + * - ExtAsym: 4 bytes * ext_asym_count
> + * - ExtHash: 4 bytes * ext_hash_count
> + * - ReqAlgStruct: variable size * param1 * 1.1+ *
> + */
> +} __packed;

...

> +struct spdm_get_digests_rsp {
> + u8 version;
> + u8 code;
> + u8 param1; /* SupportedSlotMask */ /* 1.3+ */
> + u8 param2; /* ProvisionedSlotMask */
> + u8 digests[]; /* Hash of struct spdm_cert_chain for each slot */
> + /* End of SPDM 1.2 structure */

1.2 and earlier?

> +
> + /*
> + * Additional optional fields at end of this structure:
> + * (omitted as long as we do not advertise MULTI_KEY_CAP)
> + * - KeyPairID: 1 byte for each slot * 1.3+ *
> + * - CertificateInfo: 1 byte for each slot * 1.3+ *
> + * - KeyUsageMask: 2 bytes for each slot * 1.3+ *
> + */
> +} __packed;

...

> +struct spdm_get_certificate_rsp {
> + u8 version;
> + u8 code;
> + u8 param1; /* Slot number 0..7 */
> + u8 param2; /* CertModel */ /* 1.3+ */
Why CertModel? I'm seeing Certificate Response Attributes, which has
a field called CertificateInfo. The format of that is defined by CertModel
back in the digests request, but CertModel seems inappropriate here...

Mind you, I've not read that bit of 1.3.0 yet, so maybe this is appropriate
shorthand.

> + __le16 portion_length;
> + __le16 remainder_length;
> + u8 cert_chain[]; /* PortionLength long */
> +} __packed;

...

> +#define SPDM_CHALLENGE 0x83
> +#define SPDM_MAX_OPAQUE_DATA 1024 /* SPDM 1.0.0 table 21 */
> +
> +struct spdm_challenge_req {
> + u8 version;
> + u8 code;
> + u8 param1; /* Slot number 0..7 */
> + u8 param2; /* MeasurementSummaryHash type */
> + u8 nonce[32];
> + /* End of SPDM 1.2 structure */

1.2 and earlier


> +
> + u8 context[8]; /* 1.3+ */
> +} __packed;
> +
> +struct spdm_challenge_rsp {
> + u8 version;
> + u8 code;
> + u8 param1; /* Slot number 0..7 */
> + u8 param2; /* Slot mask */
> + /*
> + * Additional fields at end of this structure:
> + * - CertChainHash: Hash of struct spdm_cert_chain for selected slot
> + * - Nonce: 32 bytes long
> + * - MeasurementSummaryHash: Optional hash of selected measurements
> + * - OpaqueDataLength: 2 bytes long
> + * - OpaqueData: Up to 1024 bytes long
> + * - RequesterContext: 8 bytes long * 1.3+ *

Perhaps call out that this is not a case of a reserved field being filled in:
it moves the signature field, which is different to the other cases above
where prior to 1.3 there was a reserved field.


> + * - Signature
> + */
> +} __packed;
> +
> +#define SPDM_ERROR 0x7f
> +
> +enum spdm_error_code {
> + spdm_invalid_request = 0x01,
> + spdm_invalid_session = 0x02, /* 1.1 only */
> + spdm_busy = 0x03,
> + spdm_unexpected_request = 0x04,
> + spdm_unspecified = 0x05,
> + spdm_decrypt_error = 0x06,
/* 1.1+ */

> + spdm_unsupported_request = 0x07,
> + spdm_request_in_flight = 0x08,
/* 1.1+ */
> + spdm_invalid_response_code = 0x09,
/* 1.1+ */
> + spdm_session_limit_exceeded = 0x0a,
/* 1.1+ */

> + spdm_session_required = 0x0b,
/* 1.2+ */
> + spdm_reset_required = 0x0c,
/* 1.2+ */
> + spdm_response_too_large = 0x0d,
/* 1.2+ */
> + spdm_request_too_large = 0x0e,
/* 1.2+ */
> + spdm_large_response = 0x0f,
/* 1.2+ */
> + spdm_message_lost = 0x10,
/* 1.2+ */
> + spdm_invalid_policy = 0x11, /* 1.3+ */

> + spdm_version_mismatch = 0x41,
> + spdm_response_not_ready = 0x42,
> + spdm_request_resynch = 0x43,
> + spdm_operation_failed = 0x44, /* 1.3+ */
> + spdm_no_pending_requests = 0x45, /* 1.3+ */
> + spdm_vendor_defined_error = 0xff,
> +};
...


> +/**
> + * struct spdm_state - SPDM session state
> + *
> + * @lock: Serializes multiple concurrent spdm_authenticate() calls.
> + * @authenticated: Whether device was authenticated successfully.
> + * @dev: Transport device. Used for error reporting and passed to @transport.
> + * @transport: Transport function to perform one message exchange.
> + * @transport_priv: Transport private data.
> + * @transport_sz: Maximum message size the transport is capable of (in bytes).
> + * Used as DataTransferSize in GET_CAPABILITIES exchange.
> + * @version: Maximum common supported version of requester and responder.
> + * Negotiated during GET_VERSION exchange.
> + * @responder_caps: Cached capabilities of responder.
> + * Received during GET_CAPABILITIES exchange.
> + * @base_asym_alg: Asymmetric key algorithm for signature verification of
> + * CHALLENGE_AUTH messages.
> + * Selected by responder during NEGOTIATE_ALGORITHMS exchange.
> + * @base_hash_alg: Hash algorithm for signature verification of
> + * CHALLENGE_AUTH messages.
> + * Selected by responder during NEGOTIATE_ALGORITHMS exchange.
> + * @slot_mask: Bitmask of populated certificate slots in the responder.
> + * Received during GET_DIGESTS exchange.
> + * @base_asym_enc: Human-readable name of @base_asym_alg's signature encoding.
> + * Passed to crypto subsystem when calling verify_signature().
> + * @s: Signature length of @base_asym_alg (in bytes). S or SigLen in SPDM
> + * specification.
> + * @base_hash_alg_name: Human-readable name of @base_hash_alg.
> + * Passed to crypto subsystem when calling crypto_alloc_shash() and
> + * verify_signature().
> + * @shash: Synchronous hash handle for @base_hash_alg computation.
> + * @desc: Synchronous hash context for @base_hash_alg computation.
> + * @h: Hash length of @base_hash_alg (in bytes). H in SPDM specification.
> + * @leaf_key: Public key portion of leaf certificate against which to check
> + * responder's signatures.
> + * @root_keyring: Keyring against which to check the first certificate in
> + * responder's certificate chain.
> + */
> +struct spdm_state {
> + struct mutex lock;
> + unsigned int authenticated:1;
> +
> + /* Transport */
> + struct device *dev;
> + spdm_transport *transport;
> + void *transport_priv;
> + u32 transport_sz;
> +
> + /* Negotiated state */
> + u8 version;
> + u32 responder_caps;
> + u32 base_asym_alg;
> + u32 base_hash_alg;
> + unsigned long slot_mask;
> +
> + /* Signature algorithm */
> + const char *base_asym_enc;
> + size_t s;
> +
> + /* Hash algorithm */
> + const char *base_hash_alg_name;
> + struct crypto_shash *shash;
> + struct shash_desc *desc;
> + size_t h;
> +
> + /* Certificates */
> + struct public_key *leaf_key;
> + struct key *root_keyring;
> +};

...


> +
> +static const struct spdm_get_version_req spdm_get_version_req = {
> + .version = 0x10,
> + .code = SPDM_GET_VERSION,
> +};

...

> +static int spdm_get_capabilities(struct spdm_state *spdm_state,
> + struct spdm_get_capabilities_reqrsp *req,
> + size_t *reqrsp_sz)
> +{
> + struct spdm_get_capabilities_reqrsp *rsp;
> + size_t req_sz;
> + size_t rsp_sz;
> + int rc, length;
> +
> + req->code = SPDM_GET_CAPABILITIES;
> + req->ctexponent = SPDM_CTEXPONENT;
> + req->flags = cpu_to_le32(SPDM_CAPS);
> +
> + if (spdm_state->version == 0x10) {
> + req_sz = offsetof(typeof(*req), reserved1);

For all of these, maybe offsetofend() would be easier to compare with the
specification than offsetof() on a field that is only defined in a later spec?
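E.g. (untested, just transcribing the current logic):

	if (spdm_state->version == 0x10) {
		req_sz = offsetofend(typeof(*req), param2);
		rsp_sz = offsetofend(typeof(*rsp), flags);
	} else if (spdm_state->version == 0x11) {
		req_sz = offsetofend(typeof(*req), flags);
		rsp_sz = offsetofend(typeof(*rsp), flags);
	}

which reads as "everything up to and including the last field that exists in
that version".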

> + rsp_sz = offsetof(typeof(*rsp), data_transfer_size);
> + } else if (spdm_state->version == 0x11) {
> + req_sz = offsetof(typeof(*req), data_transfer_size);
> + rsp_sz = offsetof(typeof(*rsp), data_transfer_size);
> + } else {
> + req_sz = sizeof(*req);
> + rsp_sz = sizeof(*rsp);
> + req->data_transfer_size = cpu_to_le32(spdm_state->transport_sz);
> + req->max_spdm_msg_size = cpu_to_le32(spdm_state->transport_sz);
> + }
> +
> + rsp = (void *)req + req_sz;

Add a comment on why we are doing this packing (I'd forgotten this mess with
building the cached version for hashing later).

> +
> + rc = spdm_exchange(spdm_state, req, req_sz, rsp, rsp_sz);
> + if (rc < 0)
> + return rc;
> +
> + length = rc;
> + if (length < rsp_sz) {
> + dev_err(spdm_state->dev, "Truncated capabilities response\n");
> + return -EIO;
> + }
> +
> + spdm_state->responder_caps = le32_to_cpu(rsp->flags);
> + if ((spdm_state->responder_caps & SPDM_MIN_CAPS) != SPDM_MIN_CAPS)
> + return -EPROTONOSUPPORT;
> +
> + if (spdm_state->version >= 0x12) {
> + u32 data_transfer_size = le32_to_cpu(rsp->data_transfer_size);
> + if (data_transfer_size < SPDM_MIN_DATA_TRANSFER_SIZE) {
> + dev_err(spdm_state->dev,
> + "Malformed capabilities response\n");
> + return -EPROTO;
> + }
> + spdm_state->transport_sz = min(spdm_state->transport_sz,
> + data_transfer_size);
> + }
> +
> + *reqrsp_sz += req_sz + rsp_sz;

This parameter isn't obvious either. I wonder if renaming it to transcript_sz,
as per the parameter passed in, would be a better idea? Or do the addition
externally to this function, where we can see why it is happening?

> +
> + return 0;
> +}
> +
> +/**
> + * spdm_start_hash() - Build first part of CHALLENGE_AUTH hash
> + *
> + * @spdm_state: SPDM session state
> + * @transcript: GET_VERSION request and GET_CAPABILITIES request and response
> + * @transcript_sz: length of @transcript
> + * @req: NEGOTIATE_ALGORITHMS request
> + * @req_sz: length of @req
> + * @rsp: ALGORITHMS response
> + * @rsp_sz: length of @rsp
> + *
> + * We've just learned the hash algorithm to use for CHALLENGE_AUTH signature
> + * verification. Hash the GET_VERSION and GET_CAPABILITIES exchanges which
> + * have been stashed in @transcript, as well as the NEGOTIATE_ALGORITHMS

This isn't quite right. GET_VERSION reply is in the transcript, but the
request is const so done separately.

> + * exchange which has just been performed. Subsequent requests and responses
> + * will be added to the hash as they become available.
> + *
> + * Return 0 on success or a negative errno.
> + */
> +static int spdm_start_hash(struct spdm_state *spdm_state,
> + void *transcript, size_t transcript_sz,
> + void *req, size_t req_sz, void *rsp, size_t rsp_sz)
> +{
> + int rc;
> +
> + spdm_state->shash = crypto_alloc_shash(spdm_state->base_hash_alg_name,
> + 0, 0);
> + if (!spdm_state->shash)
> + return -ENOMEM;
> +
> + spdm_state->desc = kzalloc(sizeof(*spdm_state->desc) +
> + crypto_shash_descsize(spdm_state->shash),
> + GFP_KERNEL);
> + if (!spdm_state->desc)
> + return -ENOMEM;
> +
> + spdm_state->desc->tfm = spdm_state->shash;
> +
> + /* Used frequently to compute offsets, so cache H */
> + spdm_state->h = crypto_shash_digestsize(spdm_state->shash);
> +
> + rc = crypto_shash_init(spdm_state->desc);
> + if (rc)
> + return rc;
> +
> + rc = crypto_shash_update(spdm_state->desc,
> + (u8 *)&spdm_get_version_req,
> + sizeof(spdm_get_version_req));
> + if (rc)
> + return rc;
> +
> + rc = crypto_shash_update(spdm_state->desc,
> + (u8 *)transcript, transcript_sz);
> + if (rc)
> + return rc;
> +
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)req, req_sz);
> + if (rc)
> + return rc;
> +
> + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, rsp_sz);
> +
> + return rc;

return crypto_...

> +}



> +static int spdm_negotiate_algs(struct spdm_state *spdm_state,
> + void *transcript, size_t transcript_sz)
> +{
> + struct spdm_req_alg_struct *req_alg_struct;
> + struct spdm_negotiate_algs_req *req;
> + struct spdm_negotiate_algs_rsp *rsp;
> + size_t req_sz = sizeof(*req);
> + size_t rsp_sz = sizeof(*rsp);
> + int rc, length;
> +
> + /* Request length shall be <= 128 bytes (SPDM 1.1.0 margin no 185) */
> + BUILD_BUG_ON(req_sz > 128);
> +
> + req = kzalloc(req_sz, GFP_KERNEL);

Maybe cleanup.h magic? Seems like it would simplify error paths here a
tiny bit. Various other cases follow, but I won't mention this every time.
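For example (sketch only, assuming use of linux/cleanup.h is acceptable in
this file):

	struct spdm_negotiate_algs_req *req __free(kfree) =
		kzalloc(sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

and the same for rsp, after which the error paths become plain returns.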

> + if (!req)
> + return -ENOMEM;
> +
> + req->code = SPDM_NEGOTIATE_ALGS;
> + req->length = cpu_to_le16(req_sz);
> + req->base_asym_algo = cpu_to_le32(SPDM_ASYM_ALGOS);
> + req->base_hash_algo = cpu_to_le32(SPDM_HASH_ALGOS);
> +
> + rsp = kzalloc(rsp_sz, GFP_KERNEL);
> + if (!rsp) {
> + rc = -ENOMEM;
> + goto err_free_req;
> + }
> +
> + rc = spdm_exchange(spdm_state, req, req_sz, rsp, rsp_sz);
> + if (rc < 0)
> + goto err_free_rsp;
> +
> + length = rc;
> + if (length < sizeof(*rsp) ||
> + length < sizeof(*rsp) + rsp->param1 * sizeof(*req_alg_struct)) {
> + dev_err(spdm_state->dev, "Truncated algorithms response\n");
> + rc = -EIO;
> + goto err_free_rsp;
> + }
> +
> + spdm_state->base_asym_alg =
> + le32_to_cpu(rsp->base_asym_sel) & SPDM_ASYM_ALGOS;
> + spdm_state->base_hash_alg =
> + le32_to_cpu(rsp->base_hash_sel) & SPDM_HASH_ALGOS;

Isn't it a bug if the responder gives us more options than we asked about?
If that happens we should scream about it.
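E.g. something along these lines (sketch, reusing the existing error style):

	if ((le32_to_cpu(rsp->base_asym_sel) & ~SPDM_ASYM_ALGOS) ||
	    (le32_to_cpu(rsp->base_hash_sel) & ~SPDM_HASH_ALGOS)) {
		dev_err(spdm_state->dev,
			"Responder selected an algorithm we did not offer\n");
		rc = -EPROTO;
		goto err_free_rsp;
	}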

> +
> + /* Responder shall select exactly 1 alg (SPDM 1.0.0 table 14) */
> + if (hweight32(spdm_state->base_asym_alg) != 1 ||
> + hweight32(spdm_state->base_hash_alg) != 1 ||
> + rsp->ext_asym_sel_count != 0 ||
> + rsp->ext_hash_sel_count != 0 ||
> + rsp->param1 > req->param1) {
> + dev_err(spdm_state->dev, "Malformed algorithms response\n");
> + rc = -EPROTO;
> + goto err_free_rsp;
> + }
> +
> + rc = spdm_parse_algs(spdm_state);
> + if (rc)
> + goto err_free_rsp;
> +
> + /*
> + * If request contained a ReqAlgStruct not supported by responder,
> + * the corresponding RespAlgStruct may be omitted in response.
> + * Calculate the actual (possibly shorter) response length:
> + */
> + rsp_sz = sizeof(*rsp) + rsp->param1 * sizeof(*req_alg_struct);
> +
> + rc = spdm_start_hash(spdm_state, transcript, transcript_sz,
> + req, req_sz, rsp, rsp_sz);
> +
> +err_free_rsp:
> + kfree(rsp);
> +err_free_req:
> + kfree(req);
> +
> + return rc;
> +}
> +
...

> +static int spdm_validate_cert_chain(struct spdm_state *spdm_state, u8 slot,
> + u8 *certs, size_t total_length)
> +{
> + struct x509_certificate *cert, *prev = NULL;
> + bool is_leaf_cert;
> + size_t offset = 0;
> + struct key *key;
> + int rc, length;
> +
> + while (offset < total_length) {
> + rc = x509_get_certificate_length(certs + offset,
> + total_length - offset);
> + if (rc < 0) {
> + dev_err(spdm_state->dev, "Invalid certificate length "
> + "at slot %u offset %zu\n", slot, offset);
> + goto err_free_prev;

If we exit here, prev == cert and double free occurs I think?

> + }
> +
> + length = rc;
> + is_leaf_cert = offset + length == total_length;
> +
> + cert = x509_cert_parse(certs + offset, length);
> + if (IS_ERR(cert)) {
> + rc = PTR_ERR(cert);
> + dev_err(spdm_state->dev, "Certificate parse error %d "
> + "at slot %u offset %zu\n", rc, slot, offset);
> + goto err_free_prev;
> + }
> + if ((is_leaf_cert ==
> + test_bit(KEY_EFLAG_CA, &cert->pub->key_eflags)) ||
> + (is_leaf_cert &&
> + !test_bit(KEY_EFLAG_DIGITALSIG, &cert->pub->key_eflags))) {

I'd like a comment on these two conditions, or expand the error message
to make it clear why these options are valid.
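Perhaps something along these lines (assuming I've read the conditions right):

		/*
		 * Reject a leaf certificate that is a CA or lacks the
		 * digitalSignature key usage, and reject an intermediate
		 * certificate that is not a CA.
		 */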

> + rc = -EKEYREJECTED;
> + dev_err(spdm_state->dev, "Malformed certificate "
> + "at slot %u offset %zu\n", slot, offset);
> + goto err_free_cert;
> + }
> + if (cert->unsupported_sig) {
> + rc = -EKEYREJECTED;
> + dev_err(spdm_state->dev, "Unsupported signature "
> + "at slot %u offset %zu\n", slot, offset);
> + goto err_free_cert;
> + }
> + if (cert->blacklisted) {
> + rc = -EKEYREJECTED;
> + goto err_free_cert;
> + }
> +
> + if (!prev) {
> + /* First cert in chain, check against root_keyring */
> + key = find_asymmetric_key(spdm_state->root_keyring,
> + cert->sig->auth_ids[0],
> + cert->sig->auth_ids[1],
> + cert->sig->auth_ids[2],
> + false);
> + if (IS_ERR(key)) {
> + dev_info(spdm_state->dev, "Root certificate "
> + "for slot %u not found in %s "
> + "keyring: %s\n", slot,
> + spdm_state->root_keyring->description,
> + cert->issuer);
> + rc = PTR_ERR(key);
> + goto err_free_cert;
> + }
> +
> + rc = verify_signature(key, cert->sig);
> + key_put(key);
> + } else {
> + /* Subsequent cert in chain, check against previous */
> + rc = public_key_verify_signature(prev->pub, cert->sig);
> + }
> +
> + if (rc) {
> + dev_err(spdm_state->dev, "Signature validation error "
> + "%d at slot %u offset %zu\n", rc, slot, offset);
> + goto err_free_cert;
> + }
> +
> + x509_free_certificate(prev);

Even this could be done with the cleanup.h stuff with appropriate
pointer stealing and hence allow direct returns.

This is the sort of case that I think really justifies that stuff.
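E.g. (sketch only, helper name invented):

DEFINE_FREE(x509_free_cert, struct x509_certificate *,
	    if (!IS_ERR_OR_NULL(_T)) x509_free_certificate(_T))

	struct x509_certificate *cert __free(x509_free_cert) =
		x509_cert_parse(certs + offset, length);

with no_free_ptr() where ownership is handed over to prev, so the error
paths can simply return.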

> + offset += length;
> + prev = cert;

As above, I think you need to set cert = NULL; here to avoid a double free,
then deal with prev, not cert, in the good path.

> + }
> +
> + prev = NULL;
> + spdm_state->leaf_key = cert->pub;
> + cert->pub = NULL;
> +
> +err_free_cert:
> + x509_free_certificate(cert);
> +err_free_prev:
> + x509_free_certificate(prev);
> + return rc;
> +}
> +
> +static int spdm_get_certificate(struct spdm_state *spdm_state, u8 slot)
> +{
> + struct spdm_get_certificate_req req = {
> + .code = SPDM_GET_CERTIFICATE,
> + .param1 = slot,
> + };
> + struct spdm_get_certificate_rsp *rsp;
> + struct spdm_cert_chain *certs = NULL;
> + size_t rsp_sz, total_length, header_length;
> + u16 remainder_length = 0xffff;
> + u16 portion_length;
> + u16 offset = 0;
> + int rc, length;
> +
> + /*
> + * It is legal for the responder to send more bytes than requested.
> + * (Note the "should" in SPDM 1.0.0 table 19.) If we allocate a
> + * too small buffer, we can't calculate the hash over the (truncated)
> + * response. Only choice is thus to allocate the maximum possible 64k.
> + */

Yikes. An alternative is to just reject any device that does this until we
get a report of a device in the wild that actually does it.

> + rsp_sz = min_t(u32, sizeof(*rsp) + 0xffff, spdm_state->transport_sz);
> + rsp = kvmalloc(rsp_sz, GFP_KERNEL);
> + if (!rsp)
> + return -ENOMEM;
...



> +
> +/**
> + * spdm_verify_signature() - Verify signature against leaf key
> + *
> + * @spdm_state: SPDM session state
> + * @s: Signature
> + * @spdm_context: SPDM context (used to create combined_spdm_prefix)
> + *
> + * Implementation of the abstract SPDMSignatureVerify() function described in
> + * SPDM 1.2.0 section 16: Compute the hash in @spdm_state->desc and verify
> + * that its signature @s was generated with @spdm_state->leaf_key.
> + * Return 0 on success or a negative errno.
> + */
> +static int spdm_verify_signature(struct spdm_state *spdm_state, u8 *s,
> + const char *spdm_context)
> +{
> + struct public_key_signature sig = {
> + .s = s,
> + .s_size = spdm_state->s,
> + .encoding = spdm_state->base_asym_enc,
> + .hash_algo = spdm_state->base_hash_alg_name,
> + };
> + u8 *m, *mhash = NULL;
> + int rc;
> +
> + m = kmalloc(SPDM_COMBINED_PREFIX_SZ + spdm_state->h, GFP_KERNEL);
> + if (!m)
> + return -ENOMEM;
> +
> + rc = crypto_shash_final(spdm_state->desc, m + SPDM_COMBINED_PREFIX_SZ);
> + if (rc)
> + goto err_free_m;
> +
> + if (spdm_state->version <= 0x11) {
> + /*
> + * Until SPDM 1.1, the signature is computed only over the hash
For SPDM 1.1 and earlier
(Until isn't necessarily inclusive).

> + * (SPDM 1.0.0 section 4.9.2.7).
> + */
> + sig.digest = m + SPDM_COMBINED_PREFIX_SZ;
> + sig.digest_size = spdm_state->h;
> + } else {
> + /*
> + * From SPDM 1.2, the hash is prefixed with spdm_context before
> + * computing the signature over the resulting message M
> + * (SPDM 1.2.0 margin no 841).
> + */
> + spdm_create_combined_prefix(spdm_state, spdm_context, m);
> +
> + /*
> + * RSA and ECDSA algorithms require that M is hashed once more.
> + * EdDSA and SM2 algorithms omit that step.
> + * The switch statement prepares for their introduction.
> + */
> + switch (spdm_state->base_asym_alg) {
> + default:
> + mhash = kmalloc(spdm_state->h, GFP_KERNEL);
> + if (!mhash) {
> + rc = -ENOMEM;
> + goto err_free_m;
> + }
> +
> + rc = crypto_shash_digest(spdm_state->desc, m,
> + SPDM_COMBINED_PREFIX_SZ + spdm_state->h,
> + mhash);
> + if (rc)
> + goto err_free_mhash;
> +
> + sig.digest = mhash;
> + sig.digest_size = spdm_state->h;
> + break;
> + }
> + }
> +
> + rc = public_key_verify_signature(spdm_state->leaf_key, &sig);
> +
> +err_free_mhash:
> + kfree(mhash);
> +err_free_m:
> + kfree(m);
> + return rc;
> +}
> +
> +/**
> + * spdm_challenge_rsp_sz() - Calculate CHALLENGE_AUTH response size
> + *
> + * @spdm_state: SPDM session state
> + * @rsp: CHALLENGE_AUTH response (optional)
> + *
> + * A CHALLENGE_AUTH response contains multiple variable-length fields
> + * as well as optional fields. This helper eases calculating its size.
> + *
> + * If @rsp is %NULL, assume the maximum OpaqueDataLength of 1024 bytes
> + * (SPDM 1.0.0 table 21). Otherwise read OpaqueDataLength from @rsp.
> + * OpaqueDataLength can only be > 0 for SPDM 1.0 and 1.1, as they lack
> + * the OtherParamsSupport field in the NEGOTIATE_ALGORITHMS request.
> + * For SPDM 1.2+, we do not offer any Opaque Data Formats in that field,
> + * which forces OpaqueDataLength to 0 (SPDM 1.2.0 margin no 261).
> + */
> +static size_t spdm_challenge_rsp_sz(struct spdm_state *spdm_state,
> + struct spdm_challenge_rsp *rsp)
> +{
> + size_t size = sizeof(*rsp) /* Header */

Double spaces look a bit strange...

> + + spdm_state->h /* CertChainHash */
> + + 32; /* Nonce */
> +
> + if (rsp)
> + /* May be unaligned if hash algorithm has unusual length. */
> + size += get_unaligned_le16((u8 *)rsp + size);
> + else
> + size += SPDM_MAX_OPAQUE_DATA; /* OpaqueData */
> +
> + size += 2; /* OpaqueDataLength */
> +
> + if (spdm_state->version >= 0x13)
> + size += 8; /* RequesterContext */
> +
> + return size + spdm_state->s; /* Signature */

Double space here as well looks odd to me.

> +}



> +
> +/**
> + * spdm_authenticate() - Authenticate device
> + *
> + * @spdm_state: SPDM session state
> + *
> + * Authenticate a device through a sequence of GET_VERSION, GET_CAPABILITIES,
> + * NEGOTIATE_ALGORITHMS, GET_DIGESTS, GET_CERTIFICATE and CHALLENGE exchanges.
> + *
> + * Perform internal locking to serialize multiple concurrent invocations.
> + * Can be called repeatedly for reauthentication.
> + *
> + * Return 0 on success or a negative errno. In particular, -EPROTONOSUPPORT
> + * indicates that authentication is not supported by the device.
> + */
> +int spdm_authenticate(struct spdm_state *spdm_state)
> +{
> + size_t transcript_sz;
> + void *transcript;
> + int rc = -ENOMEM;
> + u8 slot;
> +
> + mutex_lock(&spdm_state->lock);

You could use
guard(mutex)(&spdm_state->lock);
but if you prefer not to, that's fine by me, as there are perhaps some
readability disadvantages.

You will still need the gotos to do the rest where appropriate.

> + spdm_reset(spdm_state);
> +
> + /*
> + * For CHALLENGE_AUTH signature verification, a hash is computed over
> + * all exchanged messages to detect modification by a man-in-the-middle
> + * or media error. However the hash algorithm is not known until the
> + * NEGOTIATE_ALGORITHMS response has been received. The preceding
> + * GET_VERSION and GET_CAPABILITIES exchanges are therefore stashed
> + * in a transcript buffer and consumed once the algorithm is known.
> + * The buffer size is sufficient for the largest possible messages with
> + * 255 version entries and the capability fields added by SPDM 1.2.
> + */
> + transcript = kzalloc(struct_size_t(struct spdm_get_version_rsp,
> + version_number_entries, 255) +
> + sizeof(struct spdm_get_capabilities_reqrsp) * 2,
> + GFP_KERNEL);
> + if (!transcript)
> + goto unlock;
This path doesn't need the reset, so perhaps another label is appropriate?

> +
> + rc = spdm_get_version(spdm_state, transcript, &transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_get_capabilities(spdm_state, transcript + transcript_sz,
> + &transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_negotiate_algs(spdm_state, transcript, transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_get_digests(spdm_state);
> + if (rc)
> + goto unlock;
> +
> + for_each_set_bit(slot, &spdm_state->slot_mask, SPDM_SLOTS) {
> + rc = spdm_get_certificate(spdm_state, slot);
> + if (rc == 0)
> + break; /* success */
> + if (rc != -ENOKEY && rc != -EKEYREJECTED)
> + break; /* try next slot only on signature error */
> + }
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_challenge(spdm_state, slot);
> +
> +unlock:
> + if (rc)
> + spdm_reset(spdm_state);

I'd expect reset to also clear authenticated. It seems odd to do it
separately, and it relies on reset only being called here. If that were the
case, and you were handling locking and freeing with the cleanup.h magic, then:

rc = spdm_challenge(spdm_state, slot);
if (rc)
goto reset;
return 0;

reset:
spdm_reset(spdm_state);

> + spdm_state->authenticated = !rc;
> + mutex_unlock(&spdm_state->lock);
> + kfree(transcript);

Ordering seems strange as transcript was allocated under the lock
but freed outside it.
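FWIW, combining guard() with __free() (again just a sketch, assuming
cleanup.h is fair game) would make both of the above points moot:

	guard(mutex)(&spdm_state->lock);
	spdm_reset(spdm_state);

	void *transcript __free(kfree) = kzalloc(..., GFP_KERNEL);
	if (!transcript)
		return -ENOMEM;
	...
	rc = spdm_challenge(spdm_state, slot);
	if (rc) {
		spdm_reset(spdm_state);	/* assuming it also clears authenticated */
		return rc;
	}
	spdm_state->authenticated = true;
	return 0;

(the intermediate failures would want the same reset-and-return treatment,
or a small helper).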

> + return rc;
> +}
> +EXPORT_SYMBOL_GPL(spdm_authenticate);

...

> +/**
> + * spdm_create() - Allocate SPDM session
> + *
> + * @dev: Transport device
> + * @transport: Transport function to perform one message exchange
> + * @transport_priv: Transport private data
> + * @transport_sz: Maximum message size the transport is capable of (in bytes)
> + * @keyring: Trusted root certificates
> + *
> + * Returns a pointer to the allocated SPDM session state or NULL on error.
> + */
> +struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
> + void *transport_priv, u32 transport_sz,
> + struct key *keyring)
> +{
> + struct spdm_state *spdm_state = kzalloc(sizeof(*spdm_state), GFP_KERNEL);
> +
> + if (!spdm_state)
> + return NULL;
> +
> + spdm_state->dev = dev;
> + spdm_state->transport = transport;
> + spdm_state->transport_priv = transport_priv;
> + spdm_state->transport_sz = transport_sz;
> + spdm_state->root_keyring = keyring;
> +
> + mutex_init(&spdm_state->lock);
> +
> + return spdm_state;
> +}
> +EXPORT_SYMBOL_GPL(spdm_create);

Would it make sense to namespace these exports?
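E.g. (namespace name is just a suggestion):

EXPORT_SYMBOL_NS_GPL(spdm_create, SPDM);

plus a MODULE_IMPORT_NS(SPDM) in users such as the CMA code.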

> +
> +/**
> + * spdm_destroy() - Destroy SPDM session
> + *
> + * @spdm_state: SPDM session state
> + */
> +void spdm_destroy(struct spdm_state *spdm_state)
> +{
> + spdm_reset(spdm_state);
> + mutex_destroy(&spdm_state->lock);
> + kfree(spdm_state);
> +}
> +EXPORT_SYMBOL_GPL(spdm_destroy);
> +
> +MODULE_LICENSE("GPL");

2023-10-03 14:48:03

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 08/12] PCI/CMA: Authenticate devices on enumeration

On Thu, 28 Sep 2023 19:32:38 +0200
Lukas Wunner <[email protected]> wrote:

> From: Jonathan Cameron <[email protected]>
>
> Component Measurement and Authentication (CMA, PCIe r6.1 sec 6.31)
> allows for measurement and authentication of PCIe devices. It is
> based on the Security Protocol and Data Model specification (SPDM,
> https://www.dmtf.org/dsp/DSP0274).
>
> CMA-SPDM in turn forms the basis for Integrity and Data Encryption
> (IDE, PCIe r6.1 sec 6.33) because the key material used by IDE is
> exchanged over a CMA-SPDM session.
>
> As a first step, authenticate CMA-capable devices on enumeration.
> A subsequent commit will expose the result in sysfs.
>
> When allocating SPDM session state with spdm_create(), the maximum SPDM
> message length needs to be passed. Make the PCI_DOE_MAX_LENGTH macro
> public and calculate the maximum payload length from it.
>
> Credits: Jonathan wrote a proof-of-concept of this CMA implementation.
> Lukas reworked it for upstream. Wilfred contributed fixes for issues
> discovered during testing.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Wilfred Mallawa <[email protected]>
> Signed-off-by: Lukas Wunner <[email protected]>
Hi Lukas,

A few things inline. The biggest is making this patch build warning-free by
pulling the cma_capable flag forward from patch 10.

>
> diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
> new file mode 100644
> index 000000000000..06e5846325e3
> --- /dev/null
> +++ b/drivers/pci/cma.c


> +void pci_cma_init(struct pci_dev *pdev)
> +{
> + struct pci_doe_mb *doe;
> + int rc;
> +
> + if (!pci_cma_keyring) {
> + return;
> + }
> +
> + if (!pci_is_pcie(pdev))
> + return;
> +
> + doe = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_PCI_SIG,
> + PCI_DOE_PROTOCOL_CMA);
> + if (!doe)
> + return;
> +
> + pdev->spdm_state = spdm_create(&pdev->dev, pci_doe_transport, doe,
> + PCI_DOE_MAX_PAYLOAD, pci_cma_keyring);
> + if (!pdev->spdm_state) {
> + return;
> + }

Brackets not needed.

> +
> + rc = spdm_authenticate(pdev->spdm_state);

Hanging rc? There is a blob in patch 10 that uses it, but it's odd to keep it
around in the meantime. Perhaps just add the flag in this patch and set it,
even though no one cares about it yet.
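I.e. pull the two lines forward from patch 10:

	rc = spdm_authenticate(pdev->spdm_state);
	if (rc != -EPROTONOSUPPORT)
		pdev->cma_capable = true;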


> +}
> +
> +void pci_cma_destroy(struct pci_dev *pdev)
> +{
> + if (pdev->spdm_state)
> + spdm_destroy(pdev->spdm_state);
> +}
> +
> +__init static int pci_cma_keyring_init(void)
> +{
> + pci_cma_keyring = keyring_alloc(".cma", KUIDT_INIT(0), KGIDT_INIT(0),
> + current_cred(),
> + (KEY_POS_ALL & ~KEY_POS_SETATTR) |
> + KEY_USR_VIEW | KEY_USR_READ |
> + KEY_USR_WRITE | KEY_USR_SEARCH,
> + KEY_ALLOC_NOT_IN_QUOTA |
> + KEY_ALLOC_SET_KEEP, NULL, NULL);
> + if (IS_ERR(pci_cma_keyring)) {
> + pr_err("Could not allocate keyring\n");
> + return PTR_ERR(pci_cma_keyring);
> + }
> +
> + return 0;
> +}
> +arch_initcall(pci_cma_keyring_init);


2023-10-03 15:05:22

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 09/12] PCI/CMA: Validate Subject Alternative Name in certificates

On Thu, 28 Sep 2023 19:32:39 +0200
Lukas Wunner <[email protected]> wrote:

> PCIe r6.1 sec 6.31.3 stipulates requirements for X.509 Leaf Certificates
> presented by devices, in particular the presence of a Subject Alternative
> Name extension with a name that encodes the Vendor ID, Device ID, Device
> Serial Number, etc.

The spec lets you do any of:
* What you have here
* Reference Integrity Manifest, e.g. see Trusted Computing Group
* A pointer to a location where such a Reference Integrity Manifest can be
obtained.

So this text feels a little strong, though I'm fine with only supporting the
Subject Alternative Name bit for now. Whoever has one of the other options
can add that support :)

>
> This prevents a mismatch between the device identity in Config Space and
> the certificate. A device cannot misappropriate a certificate from a
> different device without also spoofing Config Space. As a corollary,
> it cannot dupe an arbitrary driver into binding to it. (Only those
> which bind to the device identity in the Subject Alternative Name work.)
>
> Parse the Subject Alternative Name using a small ASN.1 module and
> validate its contents. The theory of operation is explained in a code
> comment at the top of the newly added cma-x509.c.
>
> This functionality is introduced in a separate commit on top of basic
> CMA-SPDM support to split the code into digestible, reviewable chunks.
>
> The CMA OID added here is taken from the official OID Repository
> (it's not documented in the PCIe Base Spec):
> https://oid-rep.orange-labs.fr/get/2.23.147
>
> Signed-off-by: Lukas Wunner <[email protected]>

I haven't looked at ASN.1 recently enough to have any confidence in
a review of that bit...
So, for everything except the ASN.1:
Reviewed-by: Jonathan Cameron <[email protected]>


2023-10-03 15:11:11

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 10/12] PCI/CMA: Reauthenticate devices on reset and resume

On Thu, 28 Sep 2023 19:32:40 +0200
Lukas Wunner <[email protected]> wrote:

> CMA-SPDM state is lost when a device undergoes a Conventional Reset.
> (But not a Function Level Reset, PCIe r6.1 sec 6.6.2.) A D3cold to D0
> transition implies a Conventional Reset (PCIe r6.1 sec 5.8).
>
> Thus, reauthenticate devices on resume from D3cold and on recovery from
> a Secondary Bus Reset or DPC-induced Hot Reset.
>
> The requirement to reauthenticate devices on resume from system sleep
> (and in the future reestablish IDE encryption) is the reason why SPDM
> needs to be in-kernel: During ->resume_noirq, which is the first phase
> after system sleep, the PCI core walks down the hierarchy, puts each
> device in D0, restores its config space and invokes the driver's
> ->resume_noirq callback. The driver is afforded the right to access the
> device already during this phase.
>
> To retain this usage model in the face of authentication and encryption,
> CMA-SPDM reauthentication and IDE reestablishment must happen during the
> ->resume_noirq phase, before the driver's first access to the device.
> The driver is thus afforded seamless authenticated and encrypted access
> until the last moment before suspend and from the first moment after
> resume.
>
> During the ->resume_noirq phase, device interrupts are not yet enabled.
> It is thus impossible to defer CMA-SPDM reauthentication to a user space
> component on an attached disk or on the network, making an in-kernel
> SPDM implementation mandatory.
>
> The same catch-22 exists on recovery from a Conventional Reset: A user
> space SPDM implementation might live on a device which underwent reset,
> rendering its execution impossible.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> drivers/pci/cma.c | 10 ++++++++++
> drivers/pci/pci-driver.c | 1 +
> drivers/pci/pci.c | 12 ++++++++++--
> drivers/pci/pci.h | 5 +++++
> drivers/pci/pcie/err.c | 3 +++
> include/linux/pci.h | 1 +
> 6 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
> index 012190c54ab6..89d23fdc37ec 100644
> --- a/drivers/pci/cma.c
> +++ b/drivers/pci/cma.c
> @@ -71,6 +71,16 @@ void pci_cma_init(struct pci_dev *pdev)
> }
>
> rc = spdm_authenticate(pdev->spdm_state);
> + if (rc != -EPROTONOSUPPORT)
> + pdev->cma_capable = true;
This is the blob that I think wants pulling forward to the earlier patch
so that the rc assignment isn't left hanging.

> +}
> +
> +int pci_cma_reauthenticate(struct pci_dev *pdev)
> +{
> + if (!pdev->cma_capable)
> + return -ENOTTY;
> +
> + return spdm_authenticate(pdev->spdm_state);

If authentication failed, why did we leave spdm_state around?
That feels like a corner case in the earlier patch that needs documentation.
I can see that certs not being provisioned yet might be a valid reason, or
an intermittent fault (solved by reset), but in those cases we'd want to try
again on reset anyway...

> }
>
> void pci_cma_destroy(struct pci_dev *pdev)
> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
> index a79c110c7e51..b5d47eefe8df 100644
> --- a/drivers/pci/pci-driver.c
> +++ b/drivers/pci/pci-driver.c
> @@ -568,6 +568,7 @@ static void pci_pm_default_resume_early(struct pci_dev *pci_dev)
> pci_pm_power_up_and_verify_state(pci_dev);
> pci_restore_state(pci_dev);
> pci_pme_restore(pci_dev);
> + pci_cma_reauthenticate(pci_dev);
> }
>
> static void pci_pm_bridge_power_up_actions(struct pci_dev *pci_dev)
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 59c01d68c6d5..0f36e6082579 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -5248,8 +5248,16 @@ static int pci_reset_bus_function(struct pci_dev *dev, bool probe)
>
> rc = pci_dev_reset_slot_function(dev, probe);
> if (rc != -ENOTTY)
> - return rc;
> - return pci_parent_bus_reset(dev, probe);
> + goto done;
> +
> + rc = pci_parent_bus_reset(dev, probe);
> +
> +done:
> + /* CMA-SPDM state is lost upon a Conventional Reset */
> + if (!probe)
> + pci_cma_reauthenticate(dev);
> +
> + return rc;
> }
>
> void pci_dev_lock(struct pci_dev *dev)
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index 6c4755a2c91c..71092ccf4fbd 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -325,11 +325,16 @@ static inline void pci_doe_disconnected(struct pci_dev *pdev) { }
> #ifdef CONFIG_PCI_CMA
> void pci_cma_init(struct pci_dev *pdev);
> void pci_cma_destroy(struct pci_dev *pdev);
> +int pci_cma_reauthenticate(struct pci_dev *pdev);
> struct x509_certificate;
> int pci_cma_validate(struct device *dev, struct x509_certificate *leaf_cert);
> #else
> static inline void pci_cma_init(struct pci_dev *pdev) { }
> static inline void pci_cma_destroy(struct pci_dev *pdev) { }
> +static inline int pci_cma_reauthenticate(struct pci_dev *pdev)
> +{
> + return -ENOTTY;
> +}
> #endif
>
> /**
> diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
> index 59c90d04a609..4783bd907b54 100644
> --- a/drivers/pci/pcie/err.c
> +++ b/drivers/pci/pcie/err.c
> @@ -122,6 +122,9 @@ static int report_slot_reset(struct pci_dev *dev, void *data)
> pci_ers_result_t vote, *result = data;
> const struct pci_error_handlers *err_handler;
>
> + /* CMA-SPDM state is lost upon a Conventional Reset */
> + pci_cma_reauthenticate(dev);
> +
> device_lock(&dev->dev);
> pdrv = dev->driver;
> if (!pdrv ||
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 0c0123317df6..2bc11d8b567e 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -519,6 +519,7 @@ struct pci_dev {
> #endif
> #ifdef CONFIG_PCI_CMA
> struct spdm_state *spdm_state; /* Security Protocol and Data Model */
> + unsigned int cma_capable:1; /* Authentication supported */
Also, I think this should move to the earlier patch, where we know whether
it is supported, even though we don't use it until here.

> #endif
> u16 acs_cap; /* ACS Capability offset */
> phys_addr_t rom; /* Physical address if not from BAR */

2023-10-03 15:14:04

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 01/12] X.509: Make certificate parser public

On Thu, 28 Sep 2023 19:32:32 +0200
Lukas Wunner <[email protected]> wrote:

> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires validating the Subject Alternative Name
> in X.509 certificates.
>
> High-level functions for X.509 parsing such as key_create_or_update()
> throw away the internal, low-level struct x509_certificate after
> extracting the struct public_key and public_key_signature from it.
> The Subject Alternative Name is thus inaccessible when using those
> functions.
>
> Afford CMA-SPDM access to the Subject Alternative Name by making struct
> x509_certificate public, together with the functions for parsing an
> X.509 certificate into such a struct and freeing such a struct.
>
> The private header file x509_parser.h previously included <linux/time.h>
> for the definition of time64_t. That definition was since moved to
> <linux/time64.h> by commit 361a3bf00582 ("time64: Add time64.h header
> and define struct timespec64"), so adjust the #include directive as part
> of the move to the new public header file <keys/x509-parser.h>.
>
> No functional change intended.
>
> Signed-off-by: Lukas Wunner <[email protected]>
I've now got to where this is used in my review and it makes sense there.
Reviewed-by: Jonathan Cameron <[email protected]>

> ---
> crypto/asymmetric_keys/x509_parser.h | 37 +----------------------
> include/keys/x509-parser.h | 44 ++++++++++++++++++++++++++++
> 2 files changed, 45 insertions(+), 36 deletions(-)
> create mode 100644 include/keys/x509-parser.h
>
> diff --git a/crypto/asymmetric_keys/x509_parser.h b/crypto/asymmetric_keys/x509_parser.h
> index a299c9c56f40..a7ef43c39002 100644
> --- a/crypto/asymmetric_keys/x509_parser.h
> +++ b/crypto/asymmetric_keys/x509_parser.h
> @@ -5,40 +5,7 @@
> * Written by David Howells ([email protected])
> */
>
> -#include <linux/time.h>
> -#include <crypto/public_key.h>
> -#include <keys/asymmetric-type.h>
> -
> -struct x509_certificate {
> - struct x509_certificate *next;
> - struct x509_certificate *signer; /* Certificate that signed this one */
> - struct public_key *pub; /* Public key details */
> - struct public_key_signature *sig; /* Signature parameters */
> - char *issuer; /* Name of certificate issuer */
> - char *subject; /* Name of certificate subject */
> - struct asymmetric_key_id *id; /* Issuer + Serial number */
> - struct asymmetric_key_id *skid; /* Subject + subjectKeyId (optional) */
> - time64_t valid_from;
> - time64_t valid_to;
> - const void *tbs; /* Signed data */
> - unsigned tbs_size; /* Size of signed data */
> - unsigned raw_sig_size; /* Size of signature */
> - const void *raw_sig; /* Signature data */
> - const void *raw_serial; /* Raw serial number in ASN.1 */
> - unsigned raw_serial_size;
> - unsigned raw_issuer_size;
> - const void *raw_issuer; /* Raw issuer name in ASN.1 */
> - const void *raw_subject; /* Raw subject name in ASN.1 */
> - unsigned raw_subject_size;
> - unsigned raw_skid_size;
> - const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
> - unsigned index;
> - bool seen; /* Infinite recursion prevention */
> - bool verified;
> - bool self_signed; /* T if self-signed (check unsupported_sig too) */
> - bool unsupported_sig; /* T if signature uses unsupported crypto */
> - bool blacklisted;
> -};
> +#include <keys/x509-parser.h>
>
> /*
> * selftest.c
> @@ -52,8 +19,6 @@ static inline int fips_signature_selftest(void) { return 0; }
> /*
> * x509_cert_parser.c
> */
> -extern void x509_free_certificate(struct x509_certificate *cert);
> -extern struct x509_certificate *x509_cert_parse(const void *data, size_t datalen);
> extern int x509_decode_time(time64_t *_t, size_t hdrlen,
> unsigned char tag,
> const unsigned char *value, size_t vlen);
> diff --git a/include/keys/x509-parser.h b/include/keys/x509-parser.h
> new file mode 100644
> index 000000000000..7c2ebc84791f
> --- /dev/null
> +++ b/include/keys/x509-parser.h
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +/* X.509 certificate parser
> + *
> + * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
> + * Written by David Howells ([email protected])
> + */
> +
> +#include <crypto/public_key.h>
> +#include <keys/asymmetric-type.h>
> +#include <linux/time64.h>
> +
> +struct x509_certificate {
> + struct x509_certificate *next;
> + struct x509_certificate *signer; /* Certificate that signed this one */
> + struct public_key *pub; /* Public key details */
> + struct public_key_signature *sig; /* Signature parameters */
> + char *issuer; /* Name of certificate issuer */
> + char *subject; /* Name of certificate subject */
> + struct asymmetric_key_id *id; /* Issuer + Serial number */
> + struct asymmetric_key_id *skid; /* Subject + subjectKeyId (optional) */
> + time64_t valid_from;
> + time64_t valid_to;
> + const void *tbs; /* Signed data */
> + unsigned tbs_size; /* Size of signed data */
> + unsigned raw_sig_size; /* Size of signature */
> + const void *raw_sig; /* Signature data */
> + const void *raw_serial; /* Raw serial number in ASN.1 */
> + unsigned raw_serial_size;
> + unsigned raw_issuer_size;
> + const void *raw_issuer; /* Raw issuer name in ASN.1 */
> + const void *raw_subject; /* Raw subject name in ASN.1 */
> + unsigned raw_subject_size;
> + unsigned raw_skid_size;
> + const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
> + unsigned index;
> + bool seen; /* Infinite recursion prevention */
> + bool verified;
> + bool self_signed; /* T if self-signed (check unsupported_sig too) */
> + bool unsupported_sig; /* T if signature uses unsupported crypto */
> + bool blacklisted;
> +};
> +
> +struct x509_certificate *x509_cert_parse(const void *data, size_t datalen);
> +void x509_free_certificate(struct x509_certificate *cert);

2023-10-03 15:14:18

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 02/12] X.509: Parse Subject Alternative Name in certificates

On Thu, 28 Sep 2023 19:32:32 +0200
Lukas Wunner <[email protected]> wrote:

> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires validating the Subject Alternative Name
> in X.509 certificates.
>
> Store a pointer to the Subject Alternative Name upon parsing for
> consumption by CMA-SPDM.
>
> Signed-off-by: Lukas Wunner <[email protected]>

Reviewed-by: Jonathan Cameron <[email protected]>

> ---
> crypto/asymmetric_keys/x509_cert_parser.c | 15 +++++++++++++++
> include/keys/x509-parser.h | 2 ++
> 2 files changed, 17 insertions(+)
>
> diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
> index 0a7049b470c1..18dfd564740b 100644
> --- a/crypto/asymmetric_keys/x509_cert_parser.c
> +++ b/crypto/asymmetric_keys/x509_cert_parser.c
> @@ -579,6 +579,21 @@ int x509_process_extension(void *context, size_t hdrlen,
> return 0;
> }
>
> + if (ctx->last_oid == OID_subjectAltName) {
> + /*
> + * A certificate MUST NOT include more than one instance
> + * of a particular extension (RFC 5280 sec 4.2).
> + */
> + if (ctx->cert->raw_san) {
> + pr_err("Duplicate Subject Alternative Name\n");
> + return -EINVAL;
> + }
> +
> + ctx->cert->raw_san = v;
> + ctx->cert->raw_san_size = vlen;
> + return 0;
> + }
> +
> if (ctx->last_oid == OID_keyUsage) {
> /*
> * Get hold of the keyUsage bit string
> diff --git a/include/keys/x509-parser.h b/include/keys/x509-parser.h
> index 7c2ebc84791f..9c6e7cdf4870 100644
> --- a/include/keys/x509-parser.h
> +++ b/include/keys/x509-parser.h
> @@ -32,6 +32,8 @@ struct x509_certificate {
> unsigned raw_subject_size;
> unsigned raw_skid_size;
> const void *raw_skid; /* Raw subjectKeyId in ASN.1 */
> + const void *raw_san; /* Raw subjectAltName in ASN.1 */
> + unsigned raw_san_size;
> unsigned index;
> bool seen; /* Infinite recursion prevention */
> bool verified;

2023-10-03 15:28:54

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 11/12] PCI/CMA: Expose in sysfs whether devices are authenticated

On Thu, 28 Sep 2023 19:32:41 +0200
Lukas Wunner <[email protected]> wrote:

> The PCI core has just been amended to authenticate CMA-capable devices
> on enumeration and store the result in an "authenticated" bit in struct
> pci_dev->spdm_state.
>
> Expose the bit to user space through an eponymous sysfs attribute.
>
> Allow user space to trigger reauthentication (e.g. after it has updated
> the CMA keyring) by writing to the sysfs attribute.

Ah. That answers the question I asked in the previous patch's review ;)
Maybe add a comment to the cma_init code explaining that this is why it
fails with side effects (i.e. leaves the spdm_state around).
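
Something along these lines is what I have in mind (a sketch only; the
pdev fields are the ones this series adds, the helper name is invented):

#include <linux/pci.h>

/*
 * Sketch: pci_cma_init() deliberately records failures instead of tearing
 * its state down, so that cma-sysfs.c can distinguish "CMA unsupported"
 * from "support could not be determined" and fail the attribute with
 * -ENOTTY rather than hiding it (downgrade-attack prevention).
 */
static bool pci_cma_support_undetermined(struct pci_dev *pdev)
{
        return !pdev->cma_capable &&
               (pdev->cma_init_failed || pdev->doe_init_failed);
}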

>
> Subject to further discussion, a future commit might add a user-defined
> policy to forbid driver binding to devices which failed authentication,
> similar to the "authorized" attribute for USB.
>
> Alternatively, authentication success might be signaled to user space
> through a uevent, whereupon it may bind a (blacklisted) driver.
> A uevent signaling authentication failure might similarly cause user
> space to unbind or outright remove the potentially malicious device.
>
> Traffic from devices which failed authentication could also be filtered
> through ACS I/O Request Blocking Enable (PCIe r6.1 sec 7.7.11.3) or
> through Link Disable (PCIe r6.1 sec 7.5.3.7). Unlike an IOMMU, that
> will not only protect the host, but also prevent malicious peer-to-peer
> traffic to other devices.
>
> Signed-off-by: Lukas Wunner <[email protected]>
Seems good to me, though I agree with Ilpo that it would be good to mention
the DOE init failure case in the patch description, as that's a bit subtle.

One trivial comment inline.

Reviewed-by: Jonathan Cameron <[email protected]>

> ---
> Documentation/ABI/testing/sysfs-bus-pci | 27 +++++++++
> drivers/pci/Kconfig | 3 +
> drivers/pci/Makefile | 1 +
> drivers/pci/cma-sysfs.c | 73 +++++++++++++++++++++++++
> drivers/pci/cma.c | 2 +
> drivers/pci/doe.c | 2 +
> drivers/pci/pci-sysfs.c | 3 +
> drivers/pci/pci.h | 1 +
> include/linux/pci.h | 2 +
> 9 files changed, 114 insertions(+)
> create mode 100644 drivers/pci/cma-sysfs.c
>
> diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
> index ecf47559f495..2ea9b8deffcc 100644
> --- a/Documentation/ABI/testing/sysfs-bus-pci
> +++ b/Documentation/ABI/testing/sysfs-bus-pci
> @@ -500,3 +500,30 @@ Description:
> console drivers from the device. Raw users of pci-sysfs
> resourceN attributes must be terminated prior to resizing.
> Success of the resizing operation is not guaranteed.
> +
> +What: /sys/bus/pci/devices/.../authenticated
> +Date: September 2023
> +Contact: Lukas Wunner <[email protected]>
> +Description:
> + This file contains 1 if the device authenticated successfully
> + with CMA-SPDM (PCIe r6.1 sec 6.31). It contains 0 if the
> + device failed authentication (and may thus be malicious).
> +
> + Writing anything to this file causes reauthentication.
> + That may be opportune after updating the .cma keyring.
> +
> + The file is not visible if authentication is unsupported
> + by the device.
> +
> + If the kernel could not determine whether authentication is
> + supported because memory was low or DOE communication with
> + the device was not working, the file is visible but accessing
> + it fails with error code ENOTTY.
> +
> + This prevents downgrade attacks where an attacker consumes
> + memory or disturbs DOE communication in order to create the
> + appearance that a device does not support authentication.
> +
> + The reason why authentication support could not be determined
> + is apparent from "dmesg". To probe for authentication support
> + again, exercise the "remove" and "rescan" attributes.
> diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
> index c9aa5253ac1f..51df3be3438e 100644
> --- a/drivers/pci/Kconfig
> +++ b/drivers/pci/Kconfig
> @@ -129,6 +129,9 @@ config PCI_CMA
> A PCI DOE mailbox is used as transport for DMTF SPDM based
> attestation, measurement and secure channel establishment.
>
> +config PCI_CMA_SYSFS
> + def_bool PCI_CMA && SYSFS
> +
> config PCI_DOE
> bool
>
> diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
> index a18812b8832b..612ae724cd2d 100644
> --- a/drivers/pci/Makefile
> +++ b/drivers/pci/Makefile
> @@ -35,6 +35,7 @@ obj-$(CONFIG_PCI_DOE) += doe.o
> obj-$(CONFIG_PCI_DYNAMIC_OF_NODES) += of_property.o
>
> obj-$(CONFIG_PCI_CMA) += cma.o cma-x509.o cma.asn1.o
> +obj-$(CONFIG_PCI_CMA_SYSFS) += cma-sysfs.o
> $(obj)/cma-x509.o: $(obj)/cma.asn1.h
> $(obj)/cma.asn1.o: $(obj)/cma.asn1.c $(obj)/cma.asn1.h
>
> diff --git a/drivers/pci/cma-sysfs.c b/drivers/pci/cma-sysfs.c
> new file mode 100644
> index 000000000000..b2d45f96601a
> --- /dev/null
> +++ b/drivers/pci/cma-sysfs.c
> @@ -0,0 +1,73 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Component Measurement and Authentication (CMA-SPDM, PCIe r6.1 sec 6.31)
> + *
> + * Copyright (C) 2023 Intel Corporation
> + */
> +
> +#include <linux/pci.h>
> +#include <linux/spdm.h>
> +#include <linux/sysfs.h>
> +
> +#include "pci.h"
> +
> +static ssize_t authenticated_store(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> + ssize_t rc;
> +
> + if (!pdev->cma_capable &&
> + (pdev->cma_init_failed || pdev->doe_init_failed))
> + return -ENOTTY;
> +
> + rc = pci_cma_reauthenticate(pdev);
> + if (rc)
> + return rc;

> +
> + return count;
> +}
> +
> +static ssize_t authenticated_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (!pdev->cma_capable &&
> + (pdev->cma_init_failed || pdev->doe_init_failed))
> + return -ENOTTY;
> +
> + return sysfs_emit(buf, "%u\n", spdm_authenticated(pdev->spdm_state));
> +}
> +static DEVICE_ATTR_RW(authenticated);
> +
> +static struct attribute *pci_cma_attrs[] = {
> + &dev_attr_authenticated.attr,
> + NULL
> +};
> +
> +static umode_t pci_cma_attrs_are_visible(struct kobject *kobj,
> + struct attribute *a, int n)
> +{
> + struct device *dev = kobj_to_dev(kobj);
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + /*
> + * If CMA or DOE initialization failed, CMA attributes must be visible
> + * and return an error on access. This prevents downgrade attacks
> + * where an attacker disturbs memory allocation or DOE communication
> + * in order to create the appearance that CMA is unsupported.
> + * The attacker may achieve that by simply hogging memory.
> + */
> + if (!pdev->cma_capable &&
> + !pdev->cma_init_failed && !pdev->doe_init_failed)
> + return 0;
> +
> + return a->mode;
> +}
> +
> +const struct attribute_group pci_cma_attr_group = {
> + .attrs = pci_cma_attrs,

I'd go with a single space here as the double doesn't make
it any more readable.


> + .is_visible = pci_cma_attrs_are_visible,
> +};


2023-10-03 15:41:35

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

On Thu, 28 Sep 2023 19:32:42 +0200
Lukas Wunner <[email protected]> wrote:

> At any given time, only a single entity in a physical system may have
> an SPDM connection to a device. That's because the GET_VERSION request
> (which begins an authentication sequence) resets "the connection and all
> context associated with that connection" (SPDM 1.3.0 margin no 158).
>
> Thus, when a device is passed through to a guest and the guest has
> authenticated it, a subsequent authentication by the host would reset
> the device's CMA-SPDM session behind the guest's back.
>
> Prevent by letting the guest claim exclusive CMA ownership of the device
> during passthrough. Refuse CMA reauthentication on the host as long.
> After passthrough has concluded, reauthenticate the device on the host.
>
> Store the flag indicating guest ownership in struct pci_dev's priv_flags
> to avoid the concurrency issues observed by commit 44bda4b7d26e ("PCI:
> Fix is_added/is_busmaster race condition").
>
> Side note: The Data Object Exchange r1.1 ECN (published Oct 11 2022)
> retrofits DOE with Connection IDs. In theory these allow simultaneous
> CMA-SPDM connections by multiple entities to the same device. But the
> first hardware generation capable of CMA-SPDM only supports DOE r1.0.
> The specification also neglects to reserve unique Connection IDs for
> hosts and guests, which further limits its usefulness.
>
> In general, forcing the transport to compensate for SPDM's lack of a
> connection identifier feels like a questionable layering violation.

Is there anything stopping a PF presenting multiple CMA capable DOE
instances? I'd expect them to have their own contexts if they do..

Something for the future if such a device shows up perhaps.

Otherwise this looks superficially fine to me, but I'll leave
giving tags to those more familiar with the VFIO side of things
and potential use cases etc.

Jonathan




>
> Signed-off-by: Lukas Wunner <[email protected]>
> Cc: Alex Williamson <[email protected]>
> ---
> drivers/pci/cma.c | 41 ++++++++++++++++++++++++++++++++
> drivers/pci/pci.h | 1 +
> drivers/vfio/pci/vfio_pci_core.c | 9 +++++--
> include/linux/pci.h | 8 +++++++
> include/linux/spdm.h | 2 ++
> lib/spdm_requester.c | 11 +++++++++
> 6 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
> index c539ad85a28f..b3eee137ffe2 100644
> --- a/drivers/pci/cma.c
> +++ b/drivers/pci/cma.c
> @@ -82,9 +82,50 @@ int pci_cma_reauthenticate(struct pci_dev *pdev)
> if (!pdev->cma_capable)
> return -ENOTTY;
>
> + if (test_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags))
> + return -EPERM;
> +
> return spdm_authenticate(pdev->spdm_state);
> }
>
> +#if IS_ENABLED(CONFIG_VFIO_PCI_CORE)
> +/**
> + * pci_cma_claim_ownership() - Claim exclusive CMA-SPDM control for guest VM
> + * @pdev: PCI device
> + *
> + * Claim exclusive CMA-SPDM control for a guest virtual machine before
> + * passthrough of @pdev. The host refrains from performing CMA-SPDM
> + * authentication of the device until passthrough has concluded.
> + *
> + * Necessary because the GET_VERSION request resets the SPDM connection
> + * and DOE r1.0 allows only a single SPDM connection for the entire system.
> + * So the host could reset the guest's SPDM connection behind the guest's back.
> + */
> +void pci_cma_claim_ownership(struct pci_dev *pdev)
> +{
> + set_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags);
> +
> + if (pdev->cma_capable)
> + spdm_await(pdev->spdm_state);
> +}
> +EXPORT_SYMBOL(pci_cma_claim_ownership);
> +
> +/**
> + * pci_cma_return_ownership() - Relinquish CMA-SPDM control to the host
> + * @pdev: PCI device
> + *
> + * Relinquish CMA-SPDM control to the host after passthrough of @pdev to a
> + * guest virtual machine has concluded.
> + */
> +void pci_cma_return_ownership(struct pci_dev *pdev)
> +{
> + clear_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags);
> +
> + pci_cma_reauthenticate(pdev);
> +}
> +EXPORT_SYMBOL(pci_cma_return_ownership);
> +#endif
> +
> void pci_cma_destroy(struct pci_dev *pdev)
> {
> if (pdev->spdm_state)
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index d80cc06be0cc..05ae6359b152 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -388,6 +388,7 @@ static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
> #define PCI_DEV_ADDED 0
> #define PCI_DPC_RECOVERED 1
> #define PCI_DPC_RECOVERING 2
> +#define PCI_CMA_OWNED_BY_GUEST 3
>
> static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
> {
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index 1929103ee59a..6f300664a342 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -487,10 +487,12 @@ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
> if (ret)
> goto out_power;
>
> + pci_cma_claim_ownership(pdev);
> +
> /* If reset fails because of the device lock, fail this path entirely */
> ret = pci_try_reset_function(pdev);
> if (ret == -EAGAIN)
> - goto out_disable_device;
> + goto out_cma_return;
>
> vdev->reset_works = !ret;
> pci_save_state(pdev);
> @@ -549,7 +551,8 @@ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
> out_free_state:
> kfree(vdev->pci_saved_state);
> vdev->pci_saved_state = NULL;
> -out_disable_device:
> +out_cma_return:
> + pci_cma_return_ownership(pdev);
> pci_disable_device(pdev);
> out_power:
> if (!disable_idle_d3)
> @@ -678,6 +681,8 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
>
> vfio_pci_dev_set_try_reset(vdev->vdev.dev_set);
>
> + pci_cma_return_ownership(pdev);
> +
> /* Put the pm-runtime usage counter acquired during enable */
> if (!disable_idle_d3)
> pm_runtime_put(&pdev->dev);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 2c5fde81bb85..c14ea0e74fc4 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -2386,6 +2386,14 @@ static inline resource_size_t pci_iov_resource_size(struct pci_dev *dev, int res
> static inline void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe) { }
> #endif
>
> +#ifdef CONFIG_PCI_CMA
> +void pci_cma_claim_ownership(struct pci_dev *pdev);
> +void pci_cma_return_ownership(struct pci_dev *pdev);
> +#else
> +static inline void pci_cma_claim_ownership(struct pci_dev *pdev) { }
> +static inline void pci_cma_return_ownership(struct pci_dev *pdev) { }
> +#endif
> +
> #if defined(CONFIG_HOTPLUG_PCI) || defined(CONFIG_HOTPLUG_PCI_MODULE)
> void pci_hp_create_module_link(struct pci_slot *pci_slot);
> void pci_hp_remove_module_link(struct pci_slot *pci_slot);
> diff --git a/include/linux/spdm.h b/include/linux/spdm.h
> index 69a83bc2eb41..d796127fbe9a 100644
> --- a/include/linux/spdm.h
> +++ b/include/linux/spdm.h
> @@ -34,6 +34,8 @@ int spdm_authenticate(struct spdm_state *spdm_state);
>
> bool spdm_authenticated(struct spdm_state *spdm_state);
>
> +void spdm_await(struct spdm_state *spdm_state);
> +
> void spdm_destroy(struct spdm_state *spdm_state);
>
> #endif
> diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
> index b2af2074ba6f..99424d6aebf5 100644
> --- a/lib/spdm_requester.c
> +++ b/lib/spdm_requester.c
> @@ -1483,6 +1483,17 @@ struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
> }
> EXPORT_SYMBOL_GPL(spdm_create);
>
> +/**
> + * spdm_await() - Wait for ongoing spdm_authenticate() to finish
> + *
> + * @spdm_state: SPDM session state
> + */
> +void spdm_await(struct spdm_state *spdm_state)
> +{
> + mutex_lock(&spdm_state->lock);
> + mutex_unlock(&spdm_state->lock);
> +}
> +
> /**
> * spdm_destroy() - Destroy SPDM session
> *

2023-10-03 19:41:24

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

On Tue, Oct 03, 2023 at 04:40:48PM +0100, Jonathan Cameron wrote:
> On Thu, 28 Sep 2023 19:32:42 +0200 Lukas Wunner <[email protected]> wrote:
> > At any given time, only a single entity in a physical system may have
> > an SPDM connection to a device. That's because the GET_VERSION request
> > (which begins an authentication sequence) resets "the connection and all
> > context associated with that connection" (SPDM 1.3.0 margin no 158).
> >
> > Thus, when a device is passed through to a guest and the guest has
> > authenticated it, a subsequent authentication by the host would reset
> > the device's CMA-SPDM session behind the guest's back.
> >
> > Prevent by letting the guest claim exclusive CMA ownership of the device
> > during passthrough. Refuse CMA reauthentication on the host as long.
> > After passthrough has concluded, reauthenticate the device on the host.
>
> Is there anything stopping a PF presenting multiple CMA capable DOE
> instances? I'd expect them to have their own contexts if they do..

The spec does not seem to *explicitly* forbid a PF having multiple
CMA-capable DOE instances, but PCIe r6.1 sec 6.31.3 says:
"The instance of DOE used for CMA-SPDM must support ..."

Note the singular ("The instance"). It seems to suggest that the
spec authors assumed there's only a single DOE instance for CMA-SPDM.

Could you (as an English native speaker) comment on the clarity of the
two sentences "Prevent ... as long." above, as Ilpo objected to them?

The antecedent of "Prevent" is the undesirable behaviour in the preceding
sentence (host resets guest's SPDM connection).

The antecedent of "as long" is "during passthrough" in the preceding
sentence.

Is that clear and understandable for an English native speaker or
should I rephrase?

Thanks,

Lukas

2023-10-03 22:52:10

by Wilfred Mallawa

[permalink] [raw]
Subject: Re: [PATCH 02/12] X.509: Parse Subject Alternative Name in certificates

On Tue, 2023-10-03 at 11:31 +0300, Ilpo Järvinen wrote:
> On Thu, 28 Sep 2023, Lukas Wunner wrote:
>
> > The upcoming support for PCI device authentication with CMA-SPDM
> > (PCIe r6.1 sec 6.31) requires validating the Subject Alternative
> > Name
> > in X.509 certificates.
> >
> > Store a pointer to the Subject Alternative Name upon parsing for
> > consumption by CMA-SPDM.
> >
> > Signed-off-by: Lukas Wunner <[email protected]>
> > ---
> >  crypto/asymmetric_keys/x509_cert_parser.c | 15 +++++++++++++++
> >  include/keys/x509-parser.h                |  2 ++
> >  2 files changed, 17 insertions(+)
> >
> > diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
> > index 0a7049b470c1..18dfd564740b 100644
> > --- a/crypto/asymmetric_keys/x509_cert_parser.c
> > +++ b/crypto/asymmetric_keys/x509_cert_parser.c
> > @@ -579,6 +579,21 @@ int x509_process_extension(void *context, size_t hdrlen,
> >                 return 0;
> >         }
> >
> > +       if (ctx->last_oid == OID_subjectAltName) {
> > +               /*
> > +                * A certificate MUST NOT include more than one instance
> > +                * of a particular extension (RFC 5280 sec 4.2).
> > +                */
> > +               if (ctx->cert->raw_san) {
> > +                       pr_err("Duplicate Subject Alternative Name\n");
> > +                       return -EINVAL;
> > +               }
> > +
> > +               ctx->cert->raw_san = v;
> > +               ctx->cert->raw_san_size = vlen;
> > +               return 0;
> > +       }
> > +
> >         if (ctx->last_oid == OID_keyUsage) {
> >                 /*
> >                  * Get hold of the keyUsage bit string
> > diff --git a/include/keys/x509-parser.h b/include/keys/x509-parser.h
> > index 7c2ebc84791f..9c6e7cdf4870 100644
> > --- a/include/keys/x509-parser.h
> > +++ b/include/keys/x509-parser.h
> > @@ -32,6 +32,8 @@ struct x509_certificate {
> >         unsigned        raw_subject_size;
> >         unsigned        raw_skid_size;
> >         const void      *raw_skid;              /* Raw subjectKeyId in ASN.1 */
> > +       const void      *raw_san;               /* Raw subjectAltName in ASN.1 */
> > +       unsigned        raw_san_size;
> >         unsigned        index;
> >         bool            seen;                   /* Infinite recursion prevention */
> >         bool            verified;
> >
>
> Reviewed-by: Ilpo Järvinen <[email protected]>
Reviewed-by: Wilfred Mallawa <[email protected]>
>

2023-10-03 22:54:08

by Wilfred Mallawa

[permalink] [raw]
Subject: Re: [PATCH 04/12] certs: Create blacklist keyring earlier

On Tue, 2023-10-03 at 11:37 +0300, Ilpo Järvinen wrote:
> On Thu, 28 Sep 2023, Lukas Wunner wrote:
>
> > The upcoming support for PCI device authentication with CMA-SPDM
> > (PCIe r6.1 sec 6.31) requires parsing X.509 certificates upon
> > device enumeration, which happens in a subsys_initcall().
> >
> > Parsing X.509 certificates accesses the blacklist keyring:
> > x509_cert_parse()
> >   x509_get_sig_params()
> >     is_hash_blacklisted()
> >       keyring_search()
> >
> > So far the keyring is created much later in a device_initcall().  Avoid
> > a NULL pointer dereference on access to the keyring by creating it one
> > initcall level earlier than PCI device enumeration, i.e. in an
> > arch_initcall().
> >
> > Signed-off-by: Lukas Wunner <[email protected]>
> > ---
> >  certs/blacklist.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/certs/blacklist.c b/certs/blacklist.c
> > index 675dd7a8f07a..34185415d451 100644
> > --- a/certs/blacklist.c
> > +++ b/certs/blacklist.c
> > @@ -311,7 +311,7 @@ static int restrict_link_for_blacklist(struct key *dest_keyring,
> >   * Initialise the blacklist
> >   *
> >   * The blacklist_init() function is registered as an initcall via
> > - * device_initcall().  As a result if the blacklist_init() function fails for
> > + * arch_initcall().  As a result if the blacklist_init() function fails for
> >   * any reason the kernel continues to execute.  While cleanly returning -ENODEV
> >   * could be acceptable for some non-critical kernel parts, if the blacklist
> >   * keyring fails to load it defeats the certificate/key based deny list for
> > @@ -356,7 +356,7 @@ static int __init blacklist_init(void)
> >  /*
> >   * Must be initialised before we try and load the keys into the keyring.
> >   */
> > -device_initcall(blacklist_init);
> > +arch_initcall(blacklist_init);
> >  
> >  #ifdef CONFIG_SYSTEM_REVOCATION_LIST
> >  /*
> >
>
> Reviewed-by: Ilpo Järvinen <[email protected]>
Reviewed-by: Wilfred Mallawa <[email protected]>
>

2023-10-05 15:58:39

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 09/12] PCI/CMA: Validate Subject Alternative Name in certificates

On Tue, Oct 03, 2023 at 04:04:55PM +0100, Jonathan Cameron wrote:
> On Thu, 28 Sep 2023 19:32:39 +0200 Lukas Wunner <[email protected]> wrote:
> > PCIe r6.1 sec 6.31.3 stipulates requirements for X.509 Leaf Certificates
> > presented by devices, in particular the presence of a Subject Alternative
> > Name extension with a name that encodes the Vendor ID, Device ID, Device
> > Serial Number, etc.
>
> Lets you do any of
> * What you have here
> * Reference Integrity Manifest, e.g. see Trusted Computing Group
> * A pointer to a location where such a Reference Integrity Manifest can be
> obtained.
>
> So this text feels a little strong though I'm fine with only support the
> Subject Alternative Name bit for now. Whoever has one of the other options
> can add that support :)

I intend to amend the commit message as follows. If anyone believes
this is inaccurate, please let me know:

Side note: Instead of a Subject Alternative Name, Leaf Certificates may
include "a Reference Integrity Manifest, e.g., see Trusted Computing
Group" or "a pointer to a location where such a Reference Integrity
Manifest can be obtained" (PCIe r6.1 sec 6.31.3).

A Reference Integrity Manifest contains "golden" measurements which can
be compared to actual measurements retrieved from a device. It serves a
different purpose than the Subject Alternative Name, hence it is unclear
why the spec says only either of them is necessary. It is also unclear
how a Reference Integrity Manifest shall be encoded into a certificate.

Ignore the Reference Integrity Manifest requirement until this confusion
is resolved by a spec update.


> I haven't looked at ASN.1 recently enough to have any confidence in
> a review of that bit...
> So, for everything except the ASN.1:
> Reviewed-by: Jonathan Cameron <[email protected]>

In case it raises the confidence in that portion of the patch,
I have tested it successfully not just with certificates containing
a single CMA otherName, but also:

- a single otherName with a different OID
- multiple otherNames with a mix of CMA and other OIDs
- multiple otherNames plus additional unrelated dNSNames
- no Subject Alternative Name

Getting the IMPLICIT annotations right was a bit nontrivial.
It turned out that the existing crypto/asymmetric_keys/x509_akid.asn1
got that wrong as well, so I fixed it up as a byproduct of this series:

https://git.kernel.org/herbert/cryptodev-2.6/c/a1e452026e6d

The debug experience made me appreciate the kernel's ASN.1 compiler
and parser though: their code is surprisingly small, the generated
output of the compiler is quite readable, and the split compiler+parser
architecture feels much safer than what OpenSSL does.
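
For the curious, driving a generated decoder is essentially a one-liner.
A sketch (the cma_decoder symbol name is assumed from the generator's
naming convention, the wrapper itself is invented):

#include <linux/asn1_decoder.h>
#include <linux/pci.h>
#include "cma.asn1.h"           /* generated from cma.asn1 */

/* Action callbacks declared in cma.asn1 fire as the matching tags are hit. */
static int cma_parse_san(struct pci_dev *pdev, const void *san, size_t len)
{
        return asn1_ber_decoder(&cma_decoder, pdev, san, len);
}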

Thanks,

Lukas

2023-10-05 20:09:30

by Bjorn Helgaas

[permalink] [raw]
Subject: Re: [PATCH 09/12] PCI/CMA: Validate Subject Alternative Name in certificates

On Thu, Oct 05, 2023 at 04:04:47PM +0200, Lukas Wunner wrote:
> On Tue, Oct 03, 2023 at 04:04:55PM +0100, Jonathan Cameron wrote:
> > On Thu, 28 Sep 2023 19:32:39 +0200 Lukas Wunner <[email protected]> wrote:
> > > PCIe r6.1 sec 6.31.3 stipulates requirements for X.509 Leaf Certificates

The PCIe spec does not contain the string "X.509", so I assume this is sort
of a transitive requirement from SPDM.

> > > presented by devices, in particular the presence of a Subject Alternative
> > > Name extension with a name that encodes the Vendor ID, Device ID, Device
> > > Serial Number, etc.
> >
> > Lets you do any of
> > * What you have here
> > * Reference Integrity Manifest, e.g. see Trusted Computing Group
> > * A pointer to a location where such a Reference Integrity Manifest can be
> > obtained.
> >
> > So this text feels a little strong though I'm fine with only support the
> > Subject Alternative Name bit for now. Whoever has one of the other options
> > can add that support :)
>
> I intend to amend the commit message as follows. If anyone believes
> this is inaccurate, please let me know:
>
> Side note: Instead of a Subject Alternative Name, Leaf Certificates may
> include "a Reference Integrity Manifest, e.g., see Trusted Computing
> Group" or "a pointer to a location where such a Reference Integrity
> Manifest can be obtained" (PCIe r6.1 sec 6.31.3).
>
> A Reference Integrity Manifest contains "golden" measurements which can
> be compared to actual measurements retrieved from a device. It serves a
> different purpose than the Subject Alternative Name, hence it is unclear
> why the spec says only either of them is necessary. It is also unclear
> how a Reference Integrity Manifest shall be encoded into a certificate.
>
> Ignore the Reference Integrity Manifest requirement until this confusion
> is resolved by a spec update.

Thanks for this; I was about to comment the same.

Bjorn

2023-10-05 20:11:22

by Bjorn Helgaas

[permalink] [raw]
Subject: Re: [PATCH 08/12] PCI/CMA: Authenticate devices on enumeration

On Thu, Sep 28, 2023 at 07:32:38PM +0200, Lukas Wunner wrote:
> From: Jonathan Cameron <[email protected]>
>
> Component Measurement and Authentication (CMA, PCIe r6.1 sec 6.31)
> allows for measurement and authentication of PCIe devices. It is
> based on the Security Protocol and Data Model specification (SPDM,
> https://www.dmtf.org/dsp/DSP0274).

> +#define dev_fmt(fmt) "CMA: " fmt

> +void pci_cma_init(struct pci_dev *pdev)
> +{
> + struct pci_doe_mb *doe;
> + int rc;
> +
> + if (!pci_cma_keyring) {
> + return;
> + }

Jonathan mentioned the extra brackets below; here's another.

> + if (!pdev->spdm_state) {
> + return;
> + }

> + if (IS_ERR(pci_cma_keyring)) {
> + pr_err("Could not allocate keyring\n");

There's a #define dev_fmt(fmt) above, but pr_err() doesn't pick that up,
so it isn't applied to this message. I think this would need something like:

#define pr_fmt(fmt) "PCI: CMA: " fmt
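
i.e. something like this (a sketch, not the actual patch; symbols as used
in this series, and with the single-statement ifs losing their braces
while we're at it):

#define dev_fmt(fmt) "CMA: " fmt        /* picked up by dev_err() and friends */
#define pr_fmt(fmt) "PCI: CMA: " fmt    /* picked up by pr_err(): the keyring
                                         * message would then print as
                                         * "PCI: CMA: Could not allocate keyring"
                                         */

#include <linux/pci.h>

void pci_cma_init(struct pci_dev *pdev)
{
        if (!pci_cma_keyring)           /* single statement: no braces */
                return;

        if (!pdev->spdm_state)
                return;
        /* ... */
}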

2023-10-05 20:20:34

by Bjorn Helgaas

[permalink] [raw]
Subject: Re: [PATCH 11/12] PCI/CMA: Expose in sysfs whether devices are authenticated

On Thu, Sep 28, 2023 at 07:32:41PM +0200, Lukas Wunner wrote:
> The PCI core has just been amended to authenticate CMA-capable devices
> on enumeration and store the result in an "authenticated" bit in struct
> pci_dev->spdm_state.

> drivers/pci/cma-sysfs.c | 73 +++++++++++++++++++++++++

Not really sure it's worth splitting this into cma.c, cma-sysfs.c,
cma-x509.c. They're all tiny and ping-ponging between files is a bit
of a hassle:

151 drivers/pci/cma.c
73 drivers/pci/cma-sysfs.c
119 drivers/pci/cma-x509.c

Bjorn

2023-10-05 20:34:52

by Bjorn Helgaas

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

On Tue, Oct 03, 2023 at 09:30:58PM +0200, Lukas Wunner wrote:
> On Tue, Oct 03, 2023 at 04:40:48PM +0100, Jonathan Cameron wrote:
> > On Thu, 28 Sep 2023 19:32:42 +0200 Lukas Wunner <[email protected]> wrote:
> > > At any given time, only a single entity in a physical system may have
> > > an SPDM connection to a device. That's because the GET_VERSION request
> > > (which begins an authentication sequence) resets "the connection and all
> > > context associated with that connection" (SPDM 1.3.0 margin no 158).
> > >
> > > Thus, when a device is passed through to a guest and the guest has
> > > authenticated it, a subsequent authentication by the host would reset
> > > the device's CMA-SPDM session behind the guest's back.
> > >
> > > Prevent by letting the guest claim exclusive CMA ownership of the device
> > > during passthrough. Refuse CMA reauthentication on the host as long.
> > > After passthrough has concluded, reauthenticate the device on the host.

> Could you (as an English native speaker) comment on the clarity of the
> two sentences "Prevent ... as long." above, as Ilpo objected to them?
>
> The antecedent of "Prevent" is the undesirable behaviour in the preceding
> sentence (host resets guest's SPDM connection).

I think this means "prevent a reauthentication by the host behind the
guest's back" (which seems to match the first diff hunk), but I agree
it would be helpful to make the connection clearer, e.g.,

When passing a device through to a guest, mark it as "CMA owned
exclusively by the guest" for the duration of the passthrough to
prevent the host from reauthenticating and resetting the device's
CMA-SPDM session.

> The antecedent of "as long" is "during passthrough" in the preceding
> sentence.

"as long" definitely needs something to connect it with the
passthrough.

Bjorn

2023-10-06 09:31:30

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

On Tue, 3 Oct 2023 21:30:58 +0200
Lukas Wunner <[email protected]> wrote:

> On Tue, Oct 03, 2023 at 04:40:48PM +0100, Jonathan Cameron wrote:
> > On Thu, 28 Sep 2023 19:32:42 +0200 Lukas Wunner <[email protected]> wrote:
> > > At any given time, only a single entity in a physical system may have
> > > an SPDM connection to a device. That's because the GET_VERSION request
> > > (which begins an authentication sequence) resets "the connection and all
> > > context associated with that connection" (SPDM 1.3.0 margin no 158).
> > >
> > > Thus, when a device is passed through to a guest and the guest has
> > > authenticated it, a subsequent authentication by the host would reset
> > > the device's CMA-SPDM session behind the guest's back.
> > >
> > > Prevent by letting the guest claim exclusive CMA ownership of the device
> > > during passthrough. Refuse CMA reauthentication on the host as long.
> > > After passthrough has concluded, reauthenticate the device on the host.
> >
> > Is there anything stopping a PF presenting multiple CMA capable DOE
> > instances? I'd expect them to have their own contexts if they do..
>
> The spec does not seem to *explicitly* forbid a PF having multiple
> CMA-capable DOE instances, but PCIe r6.1 sec 6.31.3 says:
> "The instance of DOE used for CMA-SPDM must support ..."
>
> Note the singular ("The instance"). It seems to suggest that the
> spec authors assumed there's only a single DOE instance for CMA-SPDM.

It's a little messy, and a bit of American vs British English, I think.
If it said
"The instance of DOE used for a specific CMA-SPDM must support..."
then it would clearly allow multiple instances. Conversely, though,
I don't read that sentence as blocking multiple instances (even though
I suspect you are right and the author was thinking of there being one).

>
> Could you (as an English native speaker) comment on the clarity of the
> two sentences "Prevent ... as long." above, as Ilpo objected to them?
>
> The antecedent of "Prevent" is the undesirable behaviour in the preceding
> sentence (host resets guest's SPDM connection).
>
> The antecedent of "as long" is "during passthrough" in the preceding
> sentence.
>
> Is that clear and understandable for an English native speaker or
> should I rephrase?

Not clear enough to me as it stands. That "as long" definitely feels
like there is more to follow it, as Ilpo noted.

Maybe reword as something like

Prevent this by letting the guest claim exclusive ownership of the device
during passthrough, ensuring that problematic CMA reauthentication by the
host is blocked.

Also combine this with the previous paragraph to make it more obvious that
'this' refers to the problem described in that paragraph.

Jonathan

>
> Thanks,
>
> Lukas
>

2023-10-06 16:08:15

by Dan Williams

[permalink] [raw]
Subject: RE: [PATCH 00/12] PCI device authentication

This looks great, Lukas; some forward-looking review comments below.

Lukas Wunner wrote:
> Authenticate PCI devices with CMA-SPDM (PCIe r6.1 sec 6.31) and
> expose the result in sysfs. This enables user-defined policies
> such as forbidding driver binding to devices which failed
> authentication.
>
> CMA-SPDM forms the basis for PCI encryption (PCIe r6.1 sec 6.33),
> which will be submitted later.
>
> The meat of the series is in patches [07/12] and [08/12], which contain
> the SPDM library and the CMA glue code (the PCI-adaption of SPDM).
>
> The reason why SPDM is done in-kernel is provided in patch [10/12]:
> Briefly, when devices are reauthenticated on resume from system sleep,
> user space is not yet available. Same when reauthenticating after
> recovery from reset.
>
> One use case for CMA-SPDM and PCI encryption is confidential access
> to passed-through devices: Neither the host nor other guests are
> able to eavesdrop on device accesses, in particular if guest memory
> is encrypted as well.

Note, that holds only for traffic over the SPDM session. To get private MMIO
and T=1 traffic to private memory, coordination with the platform TSM is
mandated by all the known TSMs (CPU/platform security modules). This has
implications for policy decisions later in this series.

> Further use cases for the SPDM library are appearing on the horizon:
> Alistair Francis and Wilfred Mallawa from WDC are interested in using
> it for SCSI/SATA. David Box from Intel has implemented measurement
> retrieval over SPDM.
>
> The root of trust is initially an in-kernel key ring of certificates.
> We can discuss linking the system key ring into it, thereby allowing
> EFI to pass trusted certificates to the kernel for CMA. Alternatively,
> a bundle of trusted certificates could be loaded from the initrd.
> I envision that we'll add TPMs or remote attestation services such as
> https://keylime.dev/ to create an ecosystem of various trust sources.

Linux also has an interest in accommodating opt-in to using platform
managed keys, so the design requires that key management and session
ownership is a system owner policy choice.

> If you wish to play with PCI device authentication but lack capable
> hardware, Wilfred has written a guide how to test with qemu:
> https://github.com/twilfredo/spdm-emulation-guide-b
>

2023-10-06 18:47:53

by Dan Williams

[permalink] [raw]
Subject: RE: [PATCH 01/12] X.509: Make certificate parser public

Lukas Wunner wrote:
> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires validating the Subject Alternative Name
> in X.509 certificates.
>
> High-level functions for X.509 parsing such as key_create_or_update()
> throw away the internal, low-level struct x509_certificate after
> extracting the struct public_key and public_key_signature from it.
> The Subject Alternative Name is thus inaccessible when using those
> functions.
>
> Afford CMA-SPDM access to the Subject Alternative Name by making struct
> x509_certificate public, together with the functions for parsing an
> X.509 certificate into such a struct and freeing such a struct.
>
> The private header file x509_parser.h previously included <linux/time.h>
> for the definition of time64_t. That definition was since moved to
> <linux/time64.h> by commit 361a3bf00582 ("time64: Add time64.h header
> and define struct timespec64"), so adjust the #include directive as part
> of the move to the new public header file <keys/x509-parser.h>.
>
> No functional change intended.
>
> Signed-off-by: Lukas Wunner <[email protected]>

Looks good to me:

Reviewed-by: Dan Williams <[email protected]>

2023-10-06 19:10:11

by Dan Williams

[permalink] [raw]
Subject: RE: [PATCH 02/12] X.509: Parse Subject Alternative Name in certificates

Lukas Wunner wrote:
> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires validating the Subject Alternative Name
> in X.509 certificates.
>
> Store a pointer to the Subject Alternative Name upon parsing for
> consumption by CMA-SPDM.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> crypto/asymmetric_keys/x509_cert_parser.c | 15 +++++++++++++++
> include/keys/x509-parser.h | 2 ++
> 2 files changed, 17 insertions(+)

Looks ok to me,

Acked-by: Dan Williams <[email protected]>

2023-10-06 19:16:00

by Dan Williams

[permalink] [raw]
Subject: RE: [PATCH 03/12] X.509: Move certificate length retrieval into new helper

Lukas Wunner wrote:
> The upcoming in-kernel SPDM library (Security Protocol and Data Model,
> https://www.dmtf.org/dsp/DSP0274) needs to retrieve the length from
> ASN.1 DER-encoded X.509 certificates.
>
> Such code already exists in x509_load_certificate_list(), so move it
> into a new helper for reuse by SPDM.
>
> No functional change intended.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> crypto/asymmetric_keys/x509_loader.c | 38 +++++++++++++++++++---------
> include/keys/asymmetric-type.h | 2 ++
> 2 files changed, 28 insertions(+), 12 deletions(-)
>
> diff --git a/crypto/asymmetric_keys/x509_loader.c b/crypto/asymmetric_keys/x509_loader.c
> index a41741326998..121460a0de46 100644
> --- a/crypto/asymmetric_keys/x509_loader.c
> +++ b/crypto/asymmetric_keys/x509_loader.c
> @@ -4,28 +4,42 @@
> #include <linux/key.h>
> #include <keys/asymmetric-type.h>
>
> +int x509_get_certificate_length(const u8 *p, unsigned long buflen)
> +{
> + int plen;
> +
> + /* Each cert begins with an ASN.1 SEQUENCE tag and must be more
> + * than 256 bytes in size.
> + */
> + if (buflen < 4)
> + return -EINVAL;
> +
> + if (p[0] != 0x30 &&
> + p[1] != 0x82)
> + return -EINVAL;
> +
> + plen = (p[2] << 8) | p[3];
> + plen += 4;
> + if (plen > buflen)
> + return -EINVAL;
> +
> + return plen;
> +}
> +EXPORT_SYMBOL_GPL(x509_get_certificate_length);

Given CONFIG_PCI is a bool, is the export needed? Maybe save this export
until the modular consumer arrives, or identify the modular consumer in the
changelog?

Other than that:

Reviewed-by: Dan Williams <[email protected]>

2023-10-06 19:20:07

by Dan Williams

[permalink] [raw]
Subject: RE: [PATCH 04/12] certs: Create blacklist keyring earlier

Lukas Wunner wrote:
> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires parsing X.509 certificates upon
> device enumeration, which happens in a subsys_initcall().
>
> Parsing X.509 certificates accesses the blacklist keyring:
> x509_cert_parse()
> x509_get_sig_params()
> is_hash_blacklisted()
> keyring_search()
>
> So far the keyring is created much later in a device_initcall(). Avoid
> a NULL pointer dereference on access to the keyring by creating it one
> initcall level earlier than PCI device enumeration, i.e. in an
> arch_initcall().
>
> Signed-off-by: Lukas Wunner <[email protected]>

I was going to comment on s/blacklist/blocklist/, but the coding-style
recommendation is relative to new usage.

Reviewed-by: Dan Williams <[email protected]>

2023-10-06 19:25:03

by Dan Williams

[permalink] [raw]
Subject: RE: [PATCH 05/12] crypto: akcipher - Support more than one signature encoding

Lukas Wunner wrote:
> Currently only a single default signature encoding is supported per
> akcipher.
>
> A subsequent commit will allow a second encoding for ecdsa, namely P1363
> alternatively to X9.62.
>
> To accommodate for that, amend struct akcipher_request and struct
> crypto_akcipher_sync_data to store the desired signature encoding for
> verify and sign ops.
>
> Amend akcipher_request_set_crypt(), crypto_sig_verify() and
> crypto_sig_sign() with an additional parameter which specifies the
> desired signature encoding. Adjust all callers.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> crypto/akcipher.c | 2 +-
> crypto/asymmetric_keys/public_key.c | 4 ++--
> crypto/internal.h | 1 +
> crypto/rsa-pkcs1pad.c | 11 +++++++----
> crypto/sig.c | 6 ++++--
> crypto/testmgr.c | 8 +++++---
> crypto/testmgr.h | 1 +
> include/crypto/akcipher.h | 10 +++++++++-
> include/crypto/sig.h | 6 ++++--
> 9 files changed, 34 insertions(+), 15 deletions(-)

I can only review this in generic terms; I just wonder why it was decided to
pass a string rather than an enum?

2023-10-06 20:34:36

by Dan Williams

[permalink] [raw]
Subject: RE: [PATCH 07/12] spdm: Introduce library to authenticate devices

Lukas Wunner wrote:
> From: Jonathan Cameron <[email protected]>
>
> The Security Protocol and Data Model (SPDM) allows for authentication,
> measurement, key exchange and encrypted sessions with devices.
>
> A commonly used term for authentication and measurement is attestation.
>
> SPDM was conceived by the Distributed Management Task Force (DMTF).
> Its specification defines a request/response protocol spoken between
> host and attached devices over a variety of transports:
>
> https://www.dmtf.org/dsp/DSP0274
>
> This implementation supports SPDM 1.0 through 1.3 (the latest version).
> It is designed to be transport-agnostic as the kernel already supports
> two different SPDM-capable transports:
>
> * PCIe Data Object Exchange (PCIe r6.1 sec 6.30, drivers/pci/doe.c)
> * Management Component Transport Protocol (MCTP,
> Documentation/networking/mctp.rst)
>
> Use cases for SPDM include, but are not limited to:
>
> * PCIe Component Measurement and Authentication (PCIe r6.1 sec 6.31)
> * Compute Express Link (CXL r3.0 sec 14.11.6)
> * Open Compute Project (Attestation of System Components r1.0)
> https://www.opencompute.org/documents/attestation-v1-0-20201104-pdf
>
> The initial focus of this implementation is enabling PCIe CMA device
> authentication. As such, only a subset of the SPDM specification is
> contained herein, namely the request/response sequence GET_VERSION,
> GET_CAPABILITIES, NEGOTIATE_ALGORITHMS, GET_DIGESTS, GET_CERTIFICATE
> and CHALLENGE.
>
> A simple API is provided for subsystems wishing to authenticate devices:
> spdm_create(), spdm_authenticate() (can be called repeatedly for
> reauthentication) and spdm_destroy(). Certificates presented by devices
> are validated against an in-kernel keyring of trusted root certificates.
> A pointer to the keyring is passed to spdm_create().
>
> The set of supported cryptographic algorithms is limited to those
> declared mandatory in PCIe r6.1 sec 6.31.3. Adding more algorithms
> is straightforward as long as the crypto subsystem supports them.
>
> Future commits will extend this implementation with support for
> measurement, key exchange and encrypted sessions.
>
> So far, only the SPDM requester role is implemented. Care was taken to
> allow for effortless addition of the responder role at a later stage.
> This could be needed for a PCIe host bridge operating in endpoint mode.
> The responder role will be able to reuse struct definitions and helpers
> such as spdm_create_combined_prefix(). Those can be moved to
> spdm_common.{h,c} files upon introduction of the responder role.
> For now, all is kept in a single source file to avoid polluting the
> global namespace with unnecessary symbols.

Since you are raising design considerations for the future reuse of this
code in the responder role, I will raise some considerations for future
reuse of this code with platform security modules (the TDISP specification
calls them TSMs).

>
> Credits: Jonathan wrote a proof-of-concept of this SPDM implementation.
> Lukas reworked it for upstream.
>
> Signed-off-by: Jonathan Cameron <[email protected]>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> MAINTAINERS | 9 +
> include/linux/spdm.h | 35 +
> lib/Kconfig | 15 +
> lib/Makefile | 2 +
> lib/spdm_requester.c | 1487 ++++++++++++++++++++++++++++++++++++++++++
> 5 files changed, 1548 insertions(+)
> create mode 100644 include/linux/spdm.h
> create mode 100644 lib/spdm_requester.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 90f13281d297..2591d2217d65 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -19299,6 +19299,15 @@ M: Security Officers <[email protected]>
> S: Supported
> F: Documentation/process/security-bugs.rst
>
> +SECURITY PROTOCOL AND DATA MODEL (SPDM)
> +M: Jonathan Cameron <[email protected]>
> +M: Lukas Wunner <[email protected]>
> +L: [email protected]
> +L: [email protected]
> +S: Maintained
> +F: include/linux/spdm.h
> +F: lib/spdm*
> +
> SECURITY SUBSYSTEM
> M: Paul Moore <[email protected]>
> M: James Morris <[email protected]>
> diff --git a/include/linux/spdm.h b/include/linux/spdm.h
> new file mode 100644
> index 000000000000..e824063793a7
> --- /dev/null
> +++ b/include/linux/spdm.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMTF Security Protocol and Data Model (SPDM)
> + * https://www.dmtf.org/dsp/DSP0274
> + *
> + * Copyright (C) 2021-22 Huawei
> + * Jonathan Cameron <[email protected]>
> + *
> + * Copyright (C) 2022-23 Intel Corporation
> + */
> +
> +#ifndef _SPDM_H_
> +#define _SPDM_H_
> +
> +#include <linux/types.h>
> +
> +struct key;
> +struct device;
> +struct spdm_state;
> +
> +typedef int (spdm_transport)(void *priv, struct device *dev,
> + const void *request, size_t request_sz,
> + void *response, size_t response_sz);

TIL that even though "typedef <type> (*<name>)" is the predominant kernel
style for declaring function pointer types, significant parts of the kernel
use this "typedef <type> (<name>)" form and declare the pointer at the usage
site.
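
To make the usage-site declaration concrete, here is a sketch of how a
transport backend plugs in (the backend and setup function are invented;
only the typedef and spdm_create() signature come from the patch):

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/sizes.h>
#include <linux/spdm.h>

/* Hypothetical transport backend matching the typedef above. */
static int my_doe_transport(void *priv, struct device *dev,
                            const void *request, size_t request_sz,
                            void *response, size_t response_sz)
{
        /* hand the SPDM message to a bus-specific mailbox, e.g. DOE */
        return -ENOSYS;         /* placeholder */
}

static struct spdm_state *my_spdm_setup(struct device *dev, void *mbox,
                                        struct key *keyring)
{
        /* the pointer is formed here, at the usage site */
        return spdm_create(dev, my_doe_transport, mbox, SZ_4K, keyring);
}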

> +
> +struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
> + void *transport_priv, u32 transport_sz,
> + struct key *keyring);
> +
> +int spdm_authenticate(struct spdm_state *spdm_state);
> +
> +bool spdm_authenticated(struct spdm_state *spdm_state);
> +
> +void spdm_destroy(struct spdm_state *spdm_state);
> +
> +#endif
> diff --git a/lib/Kconfig b/lib/Kconfig
> index c686f4adc124..3516cf1dad16 100644
> --- a/lib/Kconfig
> +++ b/lib/Kconfig
> @@ -764,3 +764,18 @@ config ASN1_ENCODER
>
> config POLYNOMIAL
> tristate
> +
> +config SPDM_REQUESTER
> + tristate
> + select KEYS
> + select ASYMMETRIC_KEY_TYPE
> + select ASYMMETRIC_PUBLIC_KEY_SUBTYPE
> + select X509_CERTIFICATE_PARSER
> + help
> + The Security Protocol and Data Model (SPDM) allows for authentication,
> + measurement, key exchange and encrypted sessions with devices. This
> + option enables support for the SPDM requester role.
> +
> + Crypto algorithms offered to SPDM responders are limited to those
> + enabled in .config. Drivers selecting SPDM_REQUESTER need to also
> + select any algorithms they deem mandatory.
> diff --git a/lib/Makefile b/lib/Makefile
> index 740109b6e2c8..d9ae58a9ca83 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -315,6 +315,8 @@ obj-$(CONFIG_PERCPU_TEST) += percpu_test.o
> obj-$(CONFIG_ASN1) += asn1_decoder.o
> obj-$(CONFIG_ASN1_ENCODER) += asn1_encoder.o
>
> +obj-$(CONFIG_SPDM_REQUESTER) += spdm_requester.o
> +
> obj-$(CONFIG_FONT_SUPPORT) += fonts/
>
> hostprogs := gen_crc32table
> diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
> new file mode 100644
> index 000000000000..407041036599
> --- /dev/null
> +++ b/lib/spdm_requester.c
[..]
> +struct spdm_error_rsp {
> + u8 version;
> + u8 code;
> + enum spdm_error_code error_code:8;
> + u8 error_data;
> +
> + u8 extended_error_data[];
> +} __packed;
> +
> +static int spdm_err(struct device *dev, struct spdm_error_rsp *rsp)
> +{

Why not an error_code_to_string() helper, and then use dev_err() directly at
the call site? rsp->error_data could be conveyed unconditionally, but maybe
that betrays that I do not understand the need for filtering ->error_data.
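
E.g. (just a sketch to illustrate the shape, not tested; the helper name is
made up and most codes are elided):

static const char *spdm_error_code_to_string(enum spdm_error_code code)
{
        switch (code) {
        case spdm_invalid_request:      return "Invalid request";
        case spdm_busy:                 return "Busy";
        case spdm_unexpected_request:   return "Unexpected request";
        case spdm_unsupported_request:  return "Unsupported request";
        /* ... remaining codes elided ... */
        default:                        return "Undefined error";
        }
}

        /* at the call site: */
        dev_err(dev, "%s (error data %#x)\n",
                spdm_error_code_to_string(rsp->error_code), rsp->error_data);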

> + switch (rsp->error_code) {
> + case spdm_invalid_request:
> + dev_err(dev, "Invalid request\n");

Setting the above comment aside, do you suspect these need to be
dev_err_ratelimited() if only because it is unclear whether a user of this
library will trigger screaming error messages?


> + return -EINVAL;
> + case spdm_invalid_session:
> + if (rsp->version == 0x11) {
> + dev_err(dev, "Invalid session %#x\n", rsp->error_data);
> + return -EINVAL;
> + }
> + break;
> + case spdm_busy:
> + dev_err(dev, "Busy\n");
> + return -EBUSY;
> + case spdm_unexpected_request:
> + dev_err(dev, "Unexpected request\n");
> + return -EINVAL;
> + case spdm_unspecified:
> + dev_err(dev, "Unspecified error\n");
> + return -EINVAL;
> + case spdm_decrypt_error:
> + dev_err(dev, "Decrypt error\n");
> + return -EIO;
> + case spdm_unsupported_request:
> + dev_err(dev, "Unsupported request %#x\n", rsp->error_data);
> + return -EINVAL;
> + case spdm_request_in_flight:
> + dev_err(dev, "Request in flight\n");
> + return -EINVAL;
> + case spdm_invalid_response_code:
> + dev_err(dev, "Invalid response code\n");
> + return -EINVAL;
> + case spdm_session_limit_exceeded:
> + dev_err(dev, "Session limit exceeded\n");
> + return -EBUSY;
> + case spdm_session_required:
> + dev_err(dev, "Session required\n");
> + return -EINVAL;
> + case spdm_reset_required:
> + dev_err(dev, "Reset required\n");
> + return -ERESTART;
> + case spdm_response_too_large:
> + dev_err(dev, "Response too large\n");
> + return -EINVAL;
> + case spdm_request_too_large:
> + dev_err(dev, "Request too large\n");
> + return -EINVAL;
> + case spdm_large_response:
> + dev_err(dev, "Large response\n");
> + return -EMSGSIZE;
> + case spdm_message_lost:
> + dev_err(dev, "Message lost\n");
> + return -EIO;
> + case spdm_invalid_policy:
> + dev_err(dev, "Invalid policy\n");
> + return -EINVAL;
> + case spdm_version_mismatch:
> + dev_err(dev, "Version mismatch\n");
> + return -EINVAL;
> + case spdm_response_not_ready:
> + dev_err(dev, "Response not ready\n");
> + return -EINPROGRESS;
> + case spdm_request_resynch:
> + dev_err(dev, "Request resynchronization\n");
> + return -ERESTART;
> + case spdm_operation_failed:
> + dev_err(dev, "Operation failed\n");
> + return -EINVAL;
> + case spdm_no_pending_requests:
> + return -ENOENT;
> + case spdm_vendor_defined_error:
> + dev_err(dev, "Vendor defined error\n");
> + return -EINVAL;
> + }
> +
> + dev_err(dev, "Undefined error %#x\n", rsp->error_code);
> + return -EINVAL;
> +}
> +
[..]
> +/**
> + * spdm_authenticate() - Authenticate device
> + *
> + * @spdm_state: SPDM session state
> + *
> + * Authenticate a device through a sequence of GET_VERSION, GET_CAPABILITIES,
> + * NEGOTIATE_ALGORITHMS, GET_DIGESTS, GET_CERTIFICATE and CHALLENGE exchanges.
> + *
> + * Perform internal locking to serialize multiple concurrent invocations.
> + * Can be called repeatedly for reauthentication.
> + *
> + * Return 0 on success or a negative errno. In particular, -EPROTONOSUPPORT
> + * indicates that authentication is not supported by the device.
> + */
> +int spdm_authenticate(struct spdm_state *spdm_state)
> +{
> + size_t transcript_sz;
> + void *transcript;
> + int rc = -ENOMEM;
> + u8 slot;
> +
> + mutex_lock(&spdm_state->lock);
> + spdm_reset(spdm_state);
> +
> + /*
> + * For CHALLENGE_AUTH signature verification, a hash is computed over
> + * all exchanged messages to detect modification by a man-in-the-middle
> + * or media error. However the hash algorithm is not known until the
> + * NEGOTIATE_ALGORITHMS response has been received. The preceding
> + * GET_VERSION and GET_CAPABILITIES exchanges are therefore stashed
> + * in a transcript buffer and consumed once the algorithm is known.
> + * The buffer size is sufficient for the largest possible messages with
> + * 255 version entries and the capability fields added by SPDM 1.2.
> + */
> + transcript = kzalloc(struct_size_t(struct spdm_get_version_rsp,
> + version_number_entries, 255) +
> + sizeof(struct spdm_get_capabilities_reqrsp) * 2,
> + GFP_KERNEL);
> + if (!transcript)
> + goto unlock;
> +
> + rc = spdm_get_version(spdm_state, transcript, &transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_get_capabilities(spdm_state, transcript + transcript_sz,
> + &transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_negotiate_algs(spdm_state, transcript, transcript_sz);
> + if (rc)
> + goto unlock;
> +
> + rc = spdm_get_digests(spdm_state);
> + if (rc)
> + goto unlock;
> +
> + for_each_set_bit(slot, &spdm_state->slot_mask, SPDM_SLOTS) {
> + rc = spdm_get_certificate(spdm_state, slot);

A forward-looking comment here: how should this code be structured for reuse
when end users opt their kernel into coordinating with a platform TSM? Since
the DOE mailbox can only be owned by one entity, I expect spdm_state could
grow additional operations beyond the raw transport. These operations would
be for higher-order flows, like "get certificates", where that operation may
be forwarded from guest to VMM to TSM, and where the VMM and TSM manage the
raw transport to return the result to the guest.
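
To sketch the kind of indirection I mean (purely illustrative, nothing like
this exists in the patch):

#include <linux/spdm.h>
#include <linux/types.h>

struct spdm_requester_ops {
        /* raw transport, as today (DOE, MCTP, ...) */
        spdm_transport *transport;

        /*
         * optional higher-order flows a VMM/TSM could service on behalf of
         * a guest, e.g. returning an already-fetched certificate chain
         */
        int (*get_certificate)(void *priv, u8 slot, void *buf, size_t *len);
};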

Otherwise no other implementation comments from me, my eyes are not well
trained to spot misuse of the crypto apis.

2023-10-07 10:04:59

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Fri, Oct 06, 2023 at 09:06:13AM -0700, Dan Williams wrote:
> Lukas Wunner wrote:
> > The root of trust is initially an in-kernel key ring of certificates.
> > We can discuss linking the system key ring into it, thereby allowing
> > EFI to pass trusted certificates to the kernel for CMA. Alternatively,
> > a bundle of trusted certificates could be loaded from the initrd.
> > I envision that we'll add TPMs or remote attestation services such as
> > https://keylime.dev/ to create an ecosystem of various trust sources.
>
> Linux also has an interest in accommodating opt-in to using platform
> managed keys, so the design requires that key management and session
> ownership is a system owner policy choice.

You're pointing out a gap in the specification:

There's an existing mechanism to negotiate which PCI features are
handled natively by the OS and which by platform firmware: the _OSC
Control Field (PCI Firmware Spec r3.3, tables 4-5 and 4-6).

There are currently 10 features whose ownership is negotiated with _OSC;
examples are Hotplug control and DPC configuration control.

I propose adding an 11th bit to negotiate ownership of the CMA-SPDM
session.

Once that's added to the PCI Firmware Spec, amending the implementation
to honor it is trivial: Just check for platform ownership at the top
of pci_cma_init() and return.
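
I.e. roughly (a sketch; the _OSC bit and the native_cma flag are
hypothetical until the Firmware Spec change exists):

#include <linux/pci.h>

void pci_cma_init(struct pci_dev *pdev)
{
        struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);

        /* hypothetical flag, populated while parsing the new _OSC bit */
        if (!bridge->native_cma)
                return;

        /* ... existing initialization ... */
}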

Thanks,

Lukas

2023-10-07 14:46:58

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 05/12] crypto: akcipher - Support more than one signature encoding

On Fri, Oct 06, 2023 at 12:23:59PM -0700, Dan Williams wrote:
> Lukas Wunner wrote:
> > Currently only a single default signature encoding is supported per
> > akcipher.
> >
> > A subsequent commit will allow a second encoding for ecdsa, namely P1363
> > alternatively to X9.62.
> >
> > To accommodate for that, amend struct akcipher_request and struct
> > crypto_akcipher_sync_data to store the desired signature encoding for
> > verify and sign ops.
> >
> > Amend akcipher_request_set_crypt(), crypto_sig_verify() and
> > crypto_sig_sign() with an additional parameter which specifies the
> > desired signature encoding. Adjust all callers.
>
> I can only review this in generic terms, I just wonder why this decided to
> pass a string rather than an enum?

The keyctl user space interface passes strings and crypto/algapi.c
likewise uses strings to identify algorithms. It appears to be the
commonly used style in the crypto and keys subsystems. In particular,
security/keys/keyctl_pkey.c already uses strings for the signature
encoding.

I just tried to blend in with the existing code.
Happy to make adjustments if Herbert or David say so.

Thanks,

Lukas

2023-10-09 10:52:31

by Alexey Kardashevskiy

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication


On 29/9/23 03:32, Lukas Wunner wrote:
> At any given time, only a single entity in a physical system may have
> an SPDM connection to a device. That's because the GET_VERSION request
> (which begins an authentication sequence) resets "the connection and all
> context associated with that connection" (SPDM 1.3.0 margin no 158).
>
> Thus, when a device is passed through to a guest and the guest has
> authenticated it, a subsequent authentication by the host would reset
> the device's CMA-SPDM session behind the guest's back.
>
> Prevent this by letting the guest claim exclusive CMA ownership of the device
> during passthrough. Refuse CMA reauthentication on the host for as long as
> the guest owns the device.
> After passthrough has concluded, reauthenticate the device on the host.
>
> Store the flag indicating guest ownership in struct pci_dev's priv_flags
> to avoid the concurrency issues observed by commit 44bda4b7d26e ("PCI:
> Fix is_added/is_busmaster race condition").
>
> Side note: The Data Object Exchange r1.1 ECN (published Oct 11 2022)
> retrofits DOE with Connection IDs. In theory these allow simultaneous
> CMA-SPDM connections by multiple entities to the same device. But the
> first hardware generation capable of CMA-SPDM only supports DOE r1.0.
> The specification also neglects to reserve unique Connection IDs for
> hosts and guests, which further limits its usefulness.
>
> In general, forcing the transport to compensate for SPDM's lack of a
> connection identifier feels like a questionable layering violation.
>
> Signed-off-by: Lukas Wunner <[email protected]>
> Cc: Alex Williamson <[email protected]>
> ---
> drivers/pci/cma.c | 41 ++++++++++++++++++++++++++++++++
> drivers/pci/pci.h | 1 +
> drivers/vfio/pci/vfio_pci_core.c | 9 +++++--
> include/linux/pci.h | 8 +++++++
> include/linux/spdm.h | 2 ++
> lib/spdm_requester.c | 11 +++++++++
> 6 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c
> index c539ad85a28f..b3eee137ffe2 100644
> --- a/drivers/pci/cma.c
> +++ b/drivers/pci/cma.c
> @@ -82,9 +82,50 @@ int pci_cma_reauthenticate(struct pci_dev *pdev)
> if (!pdev->cma_capable)
> return -ENOTTY;
>
> + if (test_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags))
> + return -EPERM;
> +
> return spdm_authenticate(pdev->spdm_state);
> }
>
> +#if IS_ENABLED(CONFIG_VFIO_PCI_CORE)
> +/**
> + * pci_cma_claim_ownership() - Claim exclusive CMA-SPDM control for guest VM
> + * @pdev: PCI device
> + *
> + * Claim exclusive CMA-SPDM control for a guest virtual machine before
> + * passthrough of @pdev. The host refrains from performing CMA-SPDM
> + * authentication of the device until passthrough has concluded.
> + *
> + * Necessary because the GET_VERSION request resets the SPDM connection
> + * and DOE r1.0 allows only a single SPDM connection for the entire system.
> + * So the host could reset the guest's SPDM connection behind the guest's back.
> + */
> +void pci_cma_claim_ownership(struct pci_dev *pdev)
> +{
> + set_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags);
> +
> + if (pdev->cma_capable)
> + spdm_await(pdev->spdm_state);
> +}
> +EXPORT_SYMBOL(pci_cma_claim_ownership);
> +
> +/**
> + * pci_cma_return_ownership() - Relinquish CMA-SPDM control to the host
> + * @pdev: PCI device
> + *
> + * Relinquish CMA-SPDM control to the host after passthrough of @pdev to a
> + * guest virtual machine has concluded.
> + */
> +void pci_cma_return_ownership(struct pci_dev *pdev)
> +{
> + clear_bit(PCI_CMA_OWNED_BY_GUEST, &pdev->priv_flags);
> +
> + pci_cma_reauthenticate(pdev);
> +}
> +EXPORT_SYMBOL(pci_cma_return_ownership);
> +#endif
> +
> void pci_cma_destroy(struct pci_dev *pdev)
> {
> if (pdev->spdm_state)
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index d80cc06be0cc..05ae6359b152 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -388,6 +388,7 @@ static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
> #define PCI_DEV_ADDED 0
> #define PCI_DPC_RECOVERED 1
> #define PCI_DPC_RECOVERING 2
> +#define PCI_CMA_OWNED_BY_GUEST 3


In AMD SEV TIO, the PSP firmware creates an SPDM connection. What is the
expected way of managing such ownership, a new priv_flags bit + api for it?


>
> static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
> {
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index 1929103ee59a..6f300664a342 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -487,10 +487,12 @@ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
> if (ret)
> goto out_power;
>
> + pci_cma_claim_ownership(pdev);


and this one too - in our design the SPDM session ownership stays in the
PSP firmware. I understand that you are implementing a different thing,
but this patch triggers SPDM setup and expects it not to disappear (for
example, across a reset), so the PSP's SPDM flow needs to synchronize with
this: clear pdev->cma_capable, add a new flag, or add a blocking list to
the CMA driver. Thanks,


> +
> /* If reset fails because of the device lock, fail this path entirely */
> ret = pci_try_reset_function(pdev);
> if (ret == -EAGAIN)
> - goto out_disable_device;
> + goto out_cma_return;
>
> vdev->reset_works = !ret;
> pci_save_state(pdev);
> @@ -549,7 +551,8 @@ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev)
> out_free_state:
> kfree(vdev->pci_saved_state);
> vdev->pci_saved_state = NULL;
> -out_disable_device:
> +out_cma_return:
> + pci_cma_return_ownership(pdev);
> pci_disable_device(pdev);
> out_power:
> if (!disable_idle_d3)
> @@ -678,6 +681,8 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
>
> vfio_pci_dev_set_try_reset(vdev->vdev.dev_set);
>
> + pci_cma_return_ownership(pdev);
> +
> /* Put the pm-runtime usage counter acquired during enable */
> if (!disable_idle_d3)
> pm_runtime_put(&pdev->dev);
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 2c5fde81bb85..c14ea0e74fc4 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -2386,6 +2386,14 @@ static inline resource_size_t pci_iov_resource_size(struct pci_dev *dev, int res
> static inline void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe) { }
> #endif
>
> +#ifdef CONFIG_PCI_CMA
> +void pci_cma_claim_ownership(struct pci_dev *pdev);
> +void pci_cma_return_ownership(struct pci_dev *pdev);
> +#else
> +static inline void pci_cma_claim_ownership(struct pci_dev *pdev) { }
> +static inline void pci_cma_return_ownership(struct pci_dev *pdev) { }
> +#endif
> +
> #if defined(CONFIG_HOTPLUG_PCI) || defined(CONFIG_HOTPLUG_PCI_MODULE)
> void pci_hp_create_module_link(struct pci_slot *pci_slot);
> void pci_hp_remove_module_link(struct pci_slot *pci_slot);
> diff --git a/include/linux/spdm.h b/include/linux/spdm.h
> index 69a83bc2eb41..d796127fbe9a 100644
> --- a/include/linux/spdm.h
> +++ b/include/linux/spdm.h
> @@ -34,6 +34,8 @@ int spdm_authenticate(struct spdm_state *spdm_state);
>
> bool spdm_authenticated(struct spdm_state *spdm_state);
>
> +void spdm_await(struct spdm_state *spdm_state);
> +
> void spdm_destroy(struct spdm_state *spdm_state);
>
> #endif
> diff --git a/lib/spdm_requester.c b/lib/spdm_requester.c
> index b2af2074ba6f..99424d6aebf5 100644
> --- a/lib/spdm_requester.c
> +++ b/lib/spdm_requester.c
> @@ -1483,6 +1483,17 @@ struct spdm_state *spdm_create(struct device *dev, spdm_transport *transport,
> }
> EXPORT_SYMBOL_GPL(spdm_create);
>
> +/**
> + * spdm_await() - Wait for ongoing spdm_authenticate() to finish
> + *
> + * @spdm_state: SPDM session state
> + */
> +void spdm_await(struct spdm_state *spdm_state)
> +{
> + mutex_lock(&spdm_state->lock);
> + mutex_unlock(&spdm_state->lock);
> +}
> +
> /**
> * spdm_destroy() - Destroy SPDM session
> *

--
Alexey


2023-10-09 11:33:55

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Sat, 7 Oct 2023 12:04:33 +0200
Lukas Wunner <[email protected]> wrote:

> On Fri, Oct 06, 2023 at 09:06:13AM -0700, Dan Williams wrote:
> > Lukas Wunner wrote:
> > > The root of trust is initially an in-kernel key ring of certificates.
> > > We can discuss linking the system key ring into it, thereby allowing
> > > EFI to pass trusted certificates to the kernel for CMA. Alternatively,
> > > a bundle of trusted certificates could be loaded from the initrd.
> > > I envision that we'll add TPMs or remote attestation services such as
> > > https://keylime.dev/ to create an ecosystem of various trust sources.
> >
> > Linux also has an interest in accommodating opt-in to using platform
> > managed keys, so the design requires that key management and session
> > ownership is a system owner policy choice.
>
> You're pointing out a gap in the specification:
>
> There's an existing mechanism to negotiate which PCI features are
> handled natively by the OS and which by platform firmware and that's
> the _OSC Control Field (PCI Firmware Spec r3.3 table 4-5 and 4-6).
>
> There are currently 10 features whose ownership is negotiated with _OSC,
> examples are Hotplug control and DPC configuration control.
>
> I propose adding an 11th bit to negotiate ownership of the CMA-SPDM
> session.
>
> Once that's added to the PCI Firmware Spec, amending the implementation
> to honor it is trivial: Just check for platform ownership at the top
> of pci_cma_init() and return.

This might want to be a control over the specific DOE instance instead
of a general purpose CMA control (or maybe we want both).

There is no safe way to access a DOE to find out if it supports CMA
that doesn't potentially break another entity using the mailbox.
Given the DOE instances might be for something entirely different we
can't just decide not to use them at all based on a global control.

Any such control becomes messy when hotplug is taken into account.
I suppose we could do a _DSM based on BDF / path to device (to remain
stable across reenumeration) and config space offset to allow the OS
to say "Hi other entity / firmware, are you using this DOE instance?"
Kind of an _OSC with parameters. It also works the other way around:
the question tells the firmware that if it says "no you can't" the OS
will leave the instance alone until a reboot or similar - that potentially
avoids the problem that we already access DOE instances without taking
care about this. (I dropped the ball on this, having raised it way back
near the start of us adding DOE support.)

If we do want to do any of these, which spec is appropriate? Link it to PCI
and propose a PCI Firmware Spec update (not sure they have a code-first
process available)? Or make it somewhat generic and propose an ACPI
code-first change?

Jonathan

>
> Thanks,
>
> Lukas
>
>

2023-10-09 13:51:55

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Mon, Oct 09, 2023 at 12:33:35PM +0100, Jonathan Cameron wrote:
> On Sat, 7 Oct 2023 12:04:33 +0200 Lukas Wunner <[email protected]> wrote:
> > On Fri, Oct 06, 2023 at 09:06:13AM -0700, Dan Williams wrote:
> > > Linux also has an interest in accommodating opt-in to using platform
> > > managed keys, so the design requires that key management and session
> > > ownership is a system owner policy choice.
> >
> > You're pointing out a gap in the specification:
> >
> > There's an existing mechanism to negotiate which PCI features are
> > handled natively by the OS and which by platform firmware and that's
> > the _OSC Control Field (PCI Firmware Spec r3.3 table 4-5 and 4-6).
> >
> > There are currently 10 features whose ownership is negotiated with _OSC,
> > examples are Hotplug control and DPC configuration control.
> >
> > I propose adding an 11th bit to negotiate ownership of the CMA-SPDM
> > session.
> >
> > Once that's added to the PCI Firmware Spec, amending the implementation
> > to honor it is trivial: Just check for platform ownership at the top
> > of pci_cma_init() and return.
>
> This might want to be a control over the specific DOE instance instead
> of a general purpose CMA control (or maybe we want both).
>
> There is no safe way to access a DOE to find out if it supports CMA
> that doesn't potentially break another entity using the mailbox.
> Given the DOE instances might be for something entirely different we
> can't just decide not to use them at all based on a global control.

Per PCIe r6.1 sec 6.31.3, the DOE instance used for CMA-SPDM must support
"no other data object protocol(s)" besides DOE discovery, CMA-SPDM and
Secured CMA-SPDM.

So if the platform doesn't grant the OS control over that DOE instance,
unrelated DOE instances and protocols (such as CDAT retrieval) are not
affected.

E.g. PCI Firmware Spec r3.3 table 4-5 could be amended with something
along the lines of:

Control Field Bit Offset: 11

Interpretation: PCI Express Component Measurement and Authentication control

The operating system sets this bit to 1 to request control over the
DOE instance supporting the CMA-SPDM feature.

You're right that to discover the DOE instance for CMA-SPDM in the
first place, it needs to be accessed, which might interfere with the
firmware using it. Perhaps this can be solved with the DOE Busy bit.
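
For illustration, such a peek could be as small as the sketch below (a
heuristic only, and inherently racy, since Busy may be set or cleared
right after the read):

#include <linux/pci.h>

/*
 * Sketch: check whether another agent currently appears to be using a DOE
 * instance before the OS issues its own discovery request.  "cap" is the
 * config space offset of the DOE Extended Capability; the register and bit
 * definitions are the existing ones from uapi/linux/pci_regs.h.
 */
static bool pci_doe_appears_idle(struct pci_dev *pdev, u16 cap)
{
        u32 status;

        pci_read_config_dword(pdev, cap + PCI_DOE_STATUS, &status);

        return !(status & PCI_DOE_STATUS_BUSY);
}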


> Any such control becomes messy when hotplug is taken into account.
> I suppose we could do a _DSM based on BDF / path to device (to remain
> stable across reenumeration) and config space offset to allow the OS
> to say 'Hi other entity / firmware are you using this DOE instance?"
> Kind of an OSC with parameters. Also includes the other way around that
> the question tells the firmware that if it says "no you can't" the OS
> will leave it alone until a reboot or similar - that potentially avoids
> the problem that we access DOE instances already without taking care
> about this

PCI Firmware Spec r3.3 table 4-7 lists a number of _DSM Definitions for
PCI. Indeed that could be another solution. E.g. a newly defined _DSM
might return the offset in config space of DOE instance(s) which the OS
is not permitted to use.


> (I dropped ball on this having raised it way back near start
> of us adding DOE support.)

Not your fault. I think the industry got a bit ahead of itself in
its "confidential computing" frenzy and forgot to specify these very
basic things.


> If we do want to do any of these, which spec is appropriate? Link it to PCI
> and propose a PCI firmware spec update? (not sure they have a code
> first process available) or make it somewhat generic and propose an
> ACPI Code first change?

PCI Firmware Spec would seem to be appropriate. However this can't
be solved by the kernel community. We need to talk to our confidential
computing architects and our representatives at the PCISIG to get the
spec amended.

Thanks,

Lukas

2023-10-09 14:03:02

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

On Mon, Oct 09, 2023 at 09:52:00PM +1100, Alexey Kardashevskiy wrote:
> On 29/9/23 03:32, Lukas Wunner wrote:
> > At any given time, only a single entity in a physical system may have
> > an SPDM connection to a device. That's because the GET_VERSION request
> > (which begins an authentication sequence) resets "the connection and all
> > context associated with that connection" (SPDM 1.3.0 margin no 158).
> >
> > Thus, when a device is passed through to a guest and the guest has
> > authenticated it, a subsequent authentication by the host would reset
> > the device's CMA-SPDM session behind the guest's back.
> >
> > Prevent this by letting the guest claim exclusive CMA ownership of the device
> > during passthrough. Refuse CMA reauthentication on the host for as long as
> > the guest owns the device.
> > After passthrough has concluded, reauthenticate the device on the host.
[...]
> > --- a/drivers/pci/pci.h
> > +++ b/drivers/pci/pci.h
> > @@ -388,6 +388,7 @@ static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
> > #define PCI_DEV_ADDED 0
> > #define PCI_DPC_RECOVERED 1
> > #define PCI_DPC_RECOVERING 2
> > +#define PCI_CMA_OWNED_BY_GUEST 3
>
> In AMD SEV TIO, the PSP firmware creates an SPDM connection. What is the
> expected way of managing such ownership, a new priv_flags bit + api for it?

Right, I understand. See this ongoing discussion in reply to the
cover letter:

https://lore.kernel.org/all/[email protected]/

In short, we need a spec amendment to negotiate between platform and
OS which of the two controls the DOE instance supporting CMA-SPDM.

I think the OS is free to access any Extended Capability in
Config Space unless the platform has reserved control over it
for itself through _OSC. Because the _OSC definition in the PCI
Firmware Spec was not amended for CMA-SPDM, it is legal for the
OS to assume control of CMA-SPDM, which is what this patch does.

Thanks,

Lukas

2023-10-10 04:08:26

by Alexey Kardashevskiy

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On 10/10/23 00:49, Lukas Wunner wrote:
> On Mon, Oct 09, 2023 at 12:33:35PM +0100, Jonathan Cameron wrote:
>> On Sat, 7 Oct 2023 12:04:33 +0200 Lukas Wunner <[email protected]> wrote:
>>> On Fri, Oct 06, 2023 at 09:06:13AM -0700, Dan Williams wrote:
>>>> Linux also has an interest in accommodating opt-in to using platform
>>>> managed keys, so the design requires that key management and session
>>>> ownership is a system owner policy choice.
>>>
>>> You're pointing out a gap in the specification:
>>>
>>> There's an existing mechanism to negotiate which PCI features are
>>> handled natively by the OS and which by platform firmware and that's
>>> the _OSC Control Field (PCI Firmware Spec r3.3 table 4-5 and 4-6).
>>>
>>> There are currently 10 features whose ownership is negotiated with _OSC,
>>> examples are Hotplug control and DPC configuration control.
>>>
>>> I propose adding an 11th bit to negotiate ownership of the CMA-SPDM
>>> session.
>>>
>>> Once that's added to the PCI Firmware Spec, amending the implementation
>>> to honor it is trivial: Just check for platform ownership at the top
>>> of pci_cma_init() and return.
>>
>> This might want to be a control over the specific DOE instance instead
>> of a general purpose CMA control (or maybe we want both).
>>
>> There is no safe way to access a DOE to find out if it supports CMA
>> that doesn't potentially break another entity using the mailbox.
>> Given the DOE instances might be for something entirely different we
>> can't just decide not to use them at all based on a global control.
>
> Per PCIe r6.1 sec 6.31.3, the DOE instance used for CMA-SPDM must support
> "no other data object protocol(s)" besides DOE discovery, CMA-SPDM and
> Secured CMA-SPDM.
>
> So if the platform doesn't grant the OS control over that DOE instance,
> unrelated DOE instances and protocols (such as CDAT retrieval) are not
> affected.
>
> E.g. PCI Firmware Spec r3.3 table 4-5 could be amended with something
> along the lines of:
>
> Control Field Bit Offset: 11
>
> Interpretation: PCI Express Component Measurement and Authentication control
>
> The operating system sets this bit to 1 to request control over the
> DOE instance supporting the CMA-SPDM feature.
>
> You're right that to discover the DOE instance for CMA-SPDM in the
> first place, it needs to be accessed, which might interfere with the
> firmware using it. Perhaps this can be solved with the DOE Busy bit.
>
>
>> Any such control becomes messy when hotplug is taken into account.
>> I suppose we could do a _DSM based on BDF / path to device (to remain
>> stable across reenumeration) and config space offset to allow the OS
>> to say 'Hi other entity / firmware are you using this DOE instance?"
>> Kind of an OSC with parameters. Also includes the other way around that
>> the question tells the firmware that if it says "no you can't" the OS
>> will leave it alone until a reboot or similar - that potentially avoids
>> the problem that we access DOE instances already without taking care
>> about this
>
> PCI Firmware Spec r3.3 table 4-7 lists a number of _DSM Definitions for
> PCI. Indeed that could be another solution. E.g. a newly defined _DSM
> might return the offset in config space of DOE instance(s) which the OS
> is not permitted to use.
>
>
>> (I dropped ball on this having raised it way back near start
>> of us adding DOE support.)
>
> Not your fault. I think the industry got a bit ahead of itself in
> its "confidential computing" frenzy and forgot to specify these very
> basic things.
>
>
>> If we do want to do any of these, which spec is appropriate? Link it to PCI
>> and propose a PCI firmware spec update? (not sure they have a code
>> first process available) or make it somewhat generic and propose an
>> ACPI Code first change?
>
> PCI Firmware Spec would seem to be appropriate. However this can't
> be solved by the kernel community.

How so? It is up to the user to decide whether SPDM/CMA is done in the
kernel or by the firmware + coco; both are quite possible (it is IDE
which is not possible without the firmware on AMD, but we are not there yet).

But the way SPDM is done now, if the user (such as myself) wants to
let the firmware run SPDM, the only choice is disabling CONFIG_CMA
completely, as CMA is not an (un)loadable module or built-in (with some
"blacklist" parameters), and does not provide a sysfs knob to control
its tentacles. Kinda harsh.

Note, this PSP firmware is not the BIOS (which runs on the same core and has
the same access to PCI as the host OS); it is a separate platform processor
which only programs IDE keys into the PCI RC (via some internal bus
mechanism) but does not do anything on the bus itself and relies on the
host OS proxying DOE, and there is no ACPI between the core and the PSP.


> We need to talk to our confidential
> computing architects and our representatives at the PCISIG to get the
> spec amended.
>
> Thanks,
>
> Lukas

--
Alexey

2023-10-10 08:19:20

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
> On 10/10/23 00:49, Lukas Wunner wrote:
> > PCI Firmware Spec would seem to be appropriate. However this can't
> > be solved by the kernel community.
>
> How so? It is up to the user to decide whether it is SPDM/CMA in the kernel
> or the firmware + coco, both are quite possible (it is IDE which is not
> possible without the firmware on AMD but we are not there yet).

The user can control ownership of CMA-SPDM e.g. through a BIOS knob.
And that BIOS knob then influences the outcome of the _OSC negotiation
between platform and OS.


> But the way SPDM is done now is that if the user (as myself) wants to let
> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
> as CMA is not a (un)loadable module or built-in (with some "blacklist"
> parameters), and does not provide a sysfs knob to control its tentacles.

The problem is every single vendor thinks they can come up with
their own idea of who owns the SPDM session:

I've looked at the Nvidia driver and they've hacked libspdm into it,
so their idea is that the device driver owns the SPDM session.

AMD wants the host to proxy DOE but not own the SPDM session.

We have *standards* for a reason. So that products are interoperable.

If the kernel tries to accommodate every vendor's idea of SPDM ownership
we'll end up with an unmaintainable mess of quirks, plus sysfs knobs
which were once intended as a stopgap but can never be removed because
they're userspace ABI.

This needs to be solved in the *specification*.

And the existing solution for who owns a particular PCI feature is _OSC.
Hence this needs to be taken up with the Firmware Working Group at the
PCISIG.


> Note, this PSP firmware is not BIOS (which runs on the same core and has
> same access to PCI as the host OS), it is a separate platform processor
> which only programs IDE keys to the PCI RC (via some some internal bus
> mechanism) but does not do anything on the bus itself and relies on the host
> OS proxying DOE, and there is no APCI between the core and the psp.

Somewhat tangentially, would it be possible in your architecture
that the host or guest asks PSP to program IDE keys into the Root Port?
Or alternatively, access the key registers directly without PSP involvement?

Thanks,

Lukas

2023-10-10 12:53:49

by Alexey Kardashevskiy

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication


On 10/10/23 19:19, Lukas Wunner wrote:
> On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
>> On 10/10/23 00:49, Lukas Wunner wrote:
>>> PCI Firmware Spec would seem to be appropriate. However this can't
>>> be solved by the kernel community.
>>
>> How so? It is up to the user to decide whether it is SPDM/CMA in the kernel
>> or the firmware + coco, both are quite possible (it is IDE which is not
>> possible without the firmware on AMD but we are not there yet).
>
> The user can control ownership of CMA-SPDM e.g. through a BIOS knob.
> And that BIOS knob then influences the outcome of the _OSC negotiation
> between platform and OS.
>
>
>> But the way SPDM is done now is that if the user (as myself) wants to let
>> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
>> as CMA is not a (un)loadable module or built-in (with some "blacklist"
>> parameters), and does not provide a sysfs knob to control its tentacles.
>
> The problem is every single vendor thinks they can come up with
> their own idea of who owns the SPDM session:
>
> I've looked at the Nvidia driver and they've hacked libspdm into it,
> so their idea is that the device driver owns the SPDM session.
>
> AMD wants the host to proxy DOE but not own the SPDM session.
>
> We have *standards* for a reason. So that products are interoperable.

There is no "standard PCI ethernet device", somehow we survive ;)

> If the kernel tries to accommodate to every vendor's idea of SPDM ownership
> we'll end up with an unmaintainable mess of quirks, plus sysfs knobs
> which were once intended as a stopgap but can never be removed because
> they're userspace ABI.

The host kernel needs to accommodate the idea that it is not trusted,
and neither is the BIOS.

> This needs to be solved in the *specification*.
>
> And the existing solution for who owns a particular PCI feature is _OSC.
> Hence this needs to be taken up with the Firmware Working Group at the
> PCISIG.

I do like the general idea of specifying things, etc., but this place does
not sound right. The firmware you are talking about has full access to
PCI; the PSP firmware does not have any (besides the IDE key
programming). Is there any example of such firmware in the PCI Firmware
spec? From the BIOS standpoint, the host OS owns DOE and whatever is
sent over it (on AMD SEV TIO). The host OS chooses not to compose these
SPDM packets itself (while it could) in order to be able to run guests
without requiring them to trust the host OS.

>> Note, this PSP firmware is not BIOS (which runs on the same core and has
>> same access to PCI as the host OS), it is a separate platform processor
>> which only programs IDE keys to the PCI RC (via some some internal bus
>> mechanism) but does not do anything on the bus itself and relies on the host
>> OS proxying DOE, and there is no APCI between the core and the psp.
>
> Somewhat tangentially, would it be possible in your architecture
> that the host or guest asks PSP to program IDE keys into the Root Port?

Sure it is possible to implement. But this does not help our primary use
case which is confidential VMs where the host OS is not trusted with the
keys.

> Or alternatively, access the key registers directly without PSP involvement?

No afaik, for the reason above.


>
> Thanks,
>
> Lukas

--
Alexey


2023-10-11 16:43:25

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Tue, 10 Oct 2023 15:07:41 +1100
Alexey Kardashevskiy <[email protected]> wrote:

> On 10/10/23 00:49, Lukas Wunner wrote:
> > On Mon, Oct 09, 2023 at 12:33:35PM +0100, Jonathan Cameron wrote:
> >> On Sat, 7 Oct 2023 12:04:33 +0200 Lukas Wunner <[email protected]> wrote:
> >>> On Fri, Oct 06, 2023 at 09:06:13AM -0700, Dan Williams wrote:
> >>>> Linux also has an interest in accommodating opt-in to using platform
> >>>> managed keys, so the design requires that key management and session
> >>>> ownership is a system owner policy choice.
> >>>
> >>> You're pointing out a gap in the specification:
> >>>
> >>> There's an existing mechanism to negotiate which PCI features are
> >>> handled natively by the OS and which by platform firmware and that's
> >>> the _OSC Control Field (PCI Firmware Spec r3.3 table 4-5 and 4-6).
> >>>
> >>> There are currently 10 features whose ownership is negotiated with _OSC,
> >>> examples are Hotplug control and DPC configuration control.
> >>>
> >>> I propose adding an 11th bit to negotiate ownership of the CMA-SPDM
> >>> session.
> >>>
> >>> Once that's added to the PCI Firmware Spec, amending the implementation
> >>> to honor it is trivial: Just check for platform ownership at the top
> >>> of pci_cma_init() and return.
> >>
> >> This might want to be a control over the specific DOE instance instead
> >> of a general purpose CMA control (or maybe we want both).
> >>
> >> There is no safe way to access a DOE to find out if it supports CMA
> >> that doesn't potentially break another entity using the mailbox.
> >> Given the DOE instances might be for something entirely different we
> >> can't just decide not to use them at all based on a global control.
> >
> > Per PCIe r6.1 sec 6.31.3, the DOE instance used for CMA-SPDM must support
> > "no other data object protocol(s)" besides DOE discovery, CMA-SPDM and
> > Secured CMA-SPDM.
> >
> > So if the platform doesn't grant the OS control over that DOE instance,
> > unrelated DOE instances and protocols (such as CDAT retrieval) are not
> > affected.
> >
> > E.g. PCI Firmware Spec r3.3 table 4-5 could be amended with something
> > along the lines of:
> >
> > Control Field Bit Offset: 11
> >
> > Interpretation: PCI Express Component Measurement and Authentication control
> >
> > The operating system sets this bit to 1 to request control over the
> > DOE instance supporting the CMA-SPDM feature.
> >
> > You're right that to discover the DOE instance for CMA-SPDM in the
> > first place, it needs to be accessed, which might interfere with the
> > firmware using it. Perhaps this can be solved with the DOE Busy bit.
> >
> >
> >> Any such control becomes messy when hotplug is taken into account.
> >> I suppose we could do a _DSM based on BDF / path to device (to remain
> >> stable across reenumeration) and config space offset to allow the OS
> >> to say 'Hi other entity / firmware are you using this DOE instance?"
> >> Kind of an OSC with parameters. Also includes the other way around that
> >> the question tells the firmware that if it says "no you can't" the OS
> >> will leave it alone until a reboot or similar - that potentially avoids
> >> the problem that we access DOE instances already without taking care
> >> about this
> >
> > PCI Firmware Spec r3.3 table 4-7 lists a number of _DSM Definitions for
> > PCI. Indeed that could be another solution. E.g. a newly defined _DSM
> > might return the offset in config space of DOE instance(s) which the OS
> > is not permitted to use.
> >
> >
> >> (I dropped ball on this having raised it way back near start
> >> of us adding DOE support.)
> >
> > Not your fault. I think the industry got a bit ahead of itself in
> > its "confidential computing" frenzy and forgot to specify these very
> > basic things.
> >
> >
> >> If we do want to do any of these, which spec is appropriate? Link it to PCI
> >> and propose a PCI firmware spec update? (not sure they have a code
> >> first process available) or make it somewhat generic and propose an
> >> ACPI Code first change?
> >
> > PCI Firmware Spec would seem to be appropriate. However this can't
> > be solved by the kernel community.
>
> How so? It is up to the user to decide whether it is SPDM/CMA in the
> kernel or the firmware + coco, both are quite possible (it is IDE
> which is not possible without the firmware on AMD but we are not there yet).
>
> But the way SPDM is done now is that if the user (as myself) wants to
> let the firmware run SPDM - the only choice is disabling CONFIG_CMA
> completely as CMA is not a (un)loadable module or built-in (with some
> "blacklist" parameters), and does not provide a sysfs knob to control
> its tentacles. Kinda harsh.

Not necessarily sufficient, unfortunately - if you have a CXL type 3 device,
we will run the discovery protocol on the DOE to find out what it supports
(looking for the table access protocol used for CDAT). If that hits at the
wrong point it will likely break your CMA usage, unless you have some hardware
lockout of the relevant PCI config space (in which case that will work with
CONFIG_CMA enabled).

Now you might not care about CXL type 3 devices today, but pretty sure someone
will at some point. Or one of the other uses of DOEs will be relevant.
You might be fine assuming only drivers you've bound ever access the device's
config space, but it is much nicer to have something standard to ensure that
if we can (and driver-specific stuff will deal with it in the short term).

Jonathan

>
> Note, this PSP firmware is not BIOS (which runs on the same core and has
> same access to PCI as the host OS), it is a separate platform processor
> which only programs IDE keys to the PCI RC (via some some internal bus
> mechanism) but does not do anything on the bus itself and relies on the
> host OS proxying DOE, and there is no APCI between the core and the psp.
>
>
> > We need to talk to our confidential
> > computing architects and our representatives at the PCISIG to get the
> > spec amended.
> >
> > Thanks,
> >
> > Lukas
>

2023-10-11 16:58:03

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Tue, 10 Oct 2023 23:53:16 +1100
Alexey Kardashevskiy <[email protected]> wrote:

> On 10/10/23 19:19, Lukas Wunner wrote:
> > On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
> >> On 10/10/23 00:49, Lukas Wunner wrote:
> >>> PCI Firmware Spec would seem to be appropriate. However this can't
> >>> be solved by the kernel community.
> >>
> >> How so? It is up to the user to decide whether it is SPDM/CMA in the kernel
> >> or the firmware + coco, both are quite possible (it is IDE which is not
> >> possible without the firmware on AMD but we are not there yet).
> >
> > The user can control ownership of CMA-SPDM e.g. through a BIOS knob.
> > And that BIOS knob then influences the outcome of the _OSC negotiation
> > between platform and OS.
> >
> >
> >> But the way SPDM is done now is that if the user (as myself) wants to let
> >> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
> >> as CMA is not a (un)loadable module or built-in (with some "blacklist"
> >> parameters), and does not provide a sysfs knob to control its tentacles.
> >
> > The problem is every single vendor thinks they can come up with
> > their own idea of who owns the SPDM session:
> >
> > I've looked at the Nvidia driver and they've hacked libspdm into it,
> > so their idea is that the device driver owns the SPDM session.
> >
> > AMD wants the host to proxy DOE but not own the SPDM session.
> >
> > We have *standards* for a reason. So that products are interoperable.
>
> There is no "standard PCI ethernet device", somehow we survive ;)
>
> > If the kernel tries to accommodate to every vendor's idea of SPDM ownership
> > we'll end up with an unmaintainable mess of quirks, plus sysfs knobs
> > which were once intended as a stopgap but can never be removed because
> > they're userspace ABI.
>
> The host kernel needs to accommodate the idea that it is not trusted,
> and neither is the BIOS.
>
> > This needs to be solved in the *specification*.
> >
> > And the existing solution for who owns a particular PCI feature is _OSC.
> > Hence this needs to be taken up with the Firmware Working Group at the
> > PCISIG.
>
> I do like the general idea of specifying things, etc but this place does
> not sound right. The firmware you are talking about has full access to
> PCI, the PSP firmware does not have any (besides the IDE keys
> programming), is there any example of such firmware in the PCI Firmware
> spec? From the BIOS standpoint, the host OS owns DOE and whatever is
> sent over it (on AMD SEV TIO). The host OS chooses not to compose these
> SPDM packets itself (while it could) in order to be able to run guests
> without having them to trust the host OS.

As a minimum I'd like to see something saying - "keep away from discovery
protocol on this DOE instance". An ACPI _OSC or _DSM or similar could do that.
It won't be needed for every approach, but it might for some.

Then either firmware knows what to do, or a specific driver does.

If your proxy comes up late enough that we've already done (and cached)
discovery protocol results, then this might not be a problem for this
particular approach, as we have no reason to rerun discovery (other than
hotplug, in which case there is lots of other stuff to do anyway).

For your case we need some hooks for the PSP to be able to drive the SPDM
session, but that should be easy to allow. I don't think that precludes the
hypervisor also verifying along the way that the hardware is trusted by it
(though not used for IDE). So if you are relying on a host OS proxy I don't
think you need to disable CONFIG_CMA (maybe something around resets?)

Potentially the host OS tries first (maybe it succeeds - that doesn't matter,
though there is nothing wrong with defense in depth) and then the PSP, via a
proxy, does it all over again, which is fine. All we need to do is guarantee
ordering, and I think we are fine for that.

Far too many possible models here but such is life I guess.

>
> >> Note, this PSP firmware is not BIOS (which runs on the same core and has
> >> same access to PCI as the host OS), it is a separate platform processor
> >> which only programs IDE keys to the PCI RC (via some some internal bus
> >> mechanism) but does not do anything on the bus itself and relies on the host
> >> OS proxying DOE, and there is no APCI between the core and the psp.
> >
> > Somewhat tangentially, would it be possible in your architecture
> > that the host or guest asks PSP to program IDE keys into the Root Port?
>
> Sure it is possible to implement. But this does not help our primary use
> case which is confidential VMs where the host OS is not trusted with the
> keys.
>
> > Or alternatively, access the key registers directly without PSP involvement?
>
> No afaik, for the reason above.
>
>
> >
> > Thanks,
> >
> > Lukas
>

2023-10-12 02:22:03

by Alistair Francis

[permalink] [raw]
Subject: Re: [PATCH 04/12] certs: Create blacklist keyring earlier

On Thu, 2023-09-28 at 19:32 +0200, Lukas Wunner wrote:
> The upcoming support for PCI device authentication with CMA-SPDM
> (PCIe r6.1 sec 6.31) requires parsing X.509 certificates upon
> device enumeration, which happens in a subsys_initcall().
>
> Parsing X.509 certificates accesses the blacklist keyring:
> x509_cert_parse()
>   x509_get_sig_params()
>     is_hash_blacklisted()
>       keyring_search()
>
> So far the keyring is created much later in a device_initcall().  Avoid
> a NULL pointer dereference on access to the keyring by creating it one
> initcall level earlier than PCI device enumeration, i.e. in an
> arch_initcall().
>
> Signed-off-by: Lukas Wunner <[email protected]>

Reviewed-by: Alistair Francis <[email protected]>

Alistair

> ---
>  certs/blacklist.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/certs/blacklist.c b/certs/blacklist.c
> index 675dd7a8f07a..34185415d451 100644
> --- a/certs/blacklist.c
> +++ b/certs/blacklist.c
> @@ -311,7 +311,7 @@ static int restrict_link_for_blacklist(struct key *dest_keyring,
>   * Initialise the blacklist
>   *
>   * The blacklist_init() function is registered as an initcall via
> - * device_initcall().  As a result if the blacklist_init() function fails for
> + * arch_initcall().  As a result if the blacklist_init() function fails for
>   * any reason the kernel continues to execute.  While cleanly returning -ENODEV
>   * could be acceptable for some non-critical kernel parts, if the blacklist
>   * keyring fails to load it defeats the certificate/key based deny list for
> @@ -356,7 +356,7 @@ static int __init blacklist_init(void)
>  /*
>   * Must be initialised before we try and load the keys into the keyring.
>   */
> -device_initcall(blacklist_init);
> +arch_initcall(blacklist_init);
>  
>  #ifdef CONFIG_SYSTEM_REVOCATION_LIST
>  /*

2023-10-12 03:00:46

by Alexey Kardashevskiy

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication


On 12/10/23 03:57, Jonathan Cameron wrote:
> On Tue, 10 Oct 2023 23:53:16 +1100
> Alexey Kardashevskiy <[email protected]> wrote:
>
>> On 10/10/23 19:19, Lukas Wunner wrote:
>>> On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
>>>> On 10/10/23 00:49, Lukas Wunner wrote:
>>>>> PCI Firmware Spec would seem to be appropriate. However this can't
>>>>> be solved by the kernel community.
>>>>
>>>> How so? It is up to the user to decide whether it is SPDM/CMA in the kernel
>>>> or the firmware + coco, both are quite possible (it is IDE which is not
>>>> possible without the firmware on AMD but we are not there yet).
>>>
>>> The user can control ownership of CMA-SPDM e.g. through a BIOS knob.
>>> And that BIOS knob then influences the outcome of the _OSC negotiation
>>> between platform and OS.
>>>
>>>
>>>> But the way SPDM is done now is that if the user (as myself) wants to let
>>>> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
>>>> as CMA is not a (un)loadable module or built-in (with some "blacklist"
>>>> parameters), and does not provide a sysfs knob to control its tentacles.
>>>
>>> The problem is every single vendor thinks they can come up with
>>> their own idea of who owns the SPDM session:
>>>
>>> I've looked at the Nvidia driver and they've hacked libspdm into it,
>>> so their idea is that the device driver owns the SPDM session.
>> >
>>> AMD wants the host to proxy DOE but not own the SPDM session.
>> >
>>> We have *standards* for a reason. So that products are interoperable.
>>
>> There is no "standard PCI ethernet device", somehow we survive ;)
>>
>>> If the kernel tries to accommodate to every vendor's idea of SPDM ownership
>>> we'll end up with an unmaintainable mess of quirks, plus sysfs knobs
>>> which were once intended as a stopgap but can never be removed because
>>> they're userspace ABI.
>>
>> The host kernel needs to accommodate the idea that it is not trusted,
>> and neither is the BIOS.
>>
>>> This needs to be solved in the *specification*.
>> >
>>> And the existing solution for who owns a particular PCI feature is _OSC.
>>> Hence this needs to be taken up with the Firmware Working Group at the
>>> PCISIG.
>>
>> I do like the general idea of specifying things, etc but this place does
>> not sound right. The firmware you are talking about has full access to
>> PCI, the PSP firmware does not have any (besides the IDE keys
>> programming), is there any example of such firmware in the PCI Firmware
>> spec? From the BIOS standpoint, the host OS owns DOE and whatever is
>> sent over it (on AMD SEV TIO). The host OS chooses not to compose these
>> SPDM packets itself (while it could) in order to be able to run guests
>> without having them to trust the host OS.
>
> As a minimum I'd like to see something saying - "keep away from discovery
> protocol on this DOE instance". An ACPI _OSC or _DSM or similar could do that.
> It won't be needed for every approach, but it might for some.

I am relying on the existing DOE code to do the discovery. No ACPI in
the SEV TIO picture.

> Then either firmwware knows what to do, or a specific driver does.
>
> If your proxy comes up late enough that we've already done (and cached) discovery
> protocols results then this might not be a problem for this particular
> approach as we have no reason to rerun discovery (other than hotplug in which
> case there is lots of other stuff to do anyway).
>
> For your case we need some hooks for the PSP to be able to drive the SPDM session
> but that should be easy to allow.

This is just a couple of calls:

    doe_mb = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_PCI_SIG,
                                  PCI_DOE_PROTOCOL_SECURED_CMA_SPDM);

and

    pci_doe(doe_mb, PCI_VENDOR_ID_PCI_SIG,
            PCI_DOE_PROTOCOL_SECURED_CMA_SPDM, ...)


> I don't think precludes the hypervisor also
> verifying the hardware is trusted by it along the way (though not used for IDE).
> So if you are relying on a host OS proxy I don't thing you need to disable CONFIG_CMA
> (maybe something around resets?)

If I do the above 2 calls, then pdev->spdm_state will be out of sync.

> Potentially the host OS tries first (maybe succeeds - that doesn't matter though
> nothing wrong with defense in depth) and then the PSP via a proxy does it all over
> again which is fine. All we need to do is guarantee ordering and I think we are
> fine for that.

Only trusted bits go all over again; untrusted stuff such as discovery
is still done by the host OS, and the PSP is not rerunning it.


> Far too many possible models here but such is life I guess.

True. When I joined the x86 world (quite recently), I was surprised how
different AMD and Intel are in everything besides the userspace :)


>>>> Note, this PSP firmware is not BIOS (which runs on the same core and has
>>>> same access to PCI as the host OS), it is a separate platform processor
>>>> which only programs IDE keys to the PCI RC (via some some internal bus
>>>> mechanism) but does not do anything on the bus itself and relies on the host
>>>> OS proxying DOE, and there is no APCI between the core and the psp.
>>>
>>> Somewhat tangentially, would it be possible in your architecture
>>> that the host or guest asks PSP to program IDE keys into the Root Port?
>>
>> Sure it is possible to implement. But this does not help our primary use
>> case which is confidential VMs where the host OS is not trusted with the
>> keys.
>>
>>> Or alternatively, access the key registers directly without PSP involvement?
>>
>> No afaik, for the reason above.


--
Alexey


2023-10-12 03:27:06

by Alistair Francis

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Tue, 2023-10-03 at 15:39 +0100, Jonathan Cameron wrote:
> On Thu, 28 Sep 2023 19:32:37 +0200
> Lukas Wunner <[email protected]> wrote:
>
> > From: Jonathan Cameron <[email protected]>
> >
> > The Security Protocol and Data Model (SPDM) allows for
> > authentication,
> > measurement, key exchange and encrypted sessions with devices.
> >
> > A commonly used term for authentication and measurement is
> > attestation.
> >
> > SPDM was conceived by the Distributed Management Task Force (DMTF).
> > Its specification defines a request/response protocol spoken
> > between
> > host and attached devices over a variety of transports:
> >
> >   https://www.dmtf.org/dsp/DSP0274
> >
> > This implementation supports SPDM 1.0 through 1.3 (the latest
> > version).
>
> I've no strong objection in allowing 1.0, but I think we do need
> to control min version accepted somehow as I'm not that keen to get
> security folk analyzing old version...

Agreed. I'm not sure we even need to support 1.0

>
> > It is designed to be transport-agnostic as the kernel already
> > supports
> > two different SPDM-capable transports:
> >
> > * PCIe Data Object Exchange (PCIe r6.1 sec 6.30, drivers/pci/doe.c)
> > * Management Component Transport Protocol (MCTP,
> >   Documentation/networking/mctp.rst)
>
> The MCTP side of things is going to be interesting because mostly you
> need to jump through a bunch of hoops (address assignment, routing
> setup
> etc) before you can actually talk to a device.   That all involves
> a userspace agent.  So I'm not 100% sure how this will all turn out.
> However still makes sense to have a transport agnostic implementation
> as if nothing else it makes it easier to review as keeps us within
> one specification.

This list will probably expand in the future though

> >
> > Use cases for SPDM include, but are not limited to:
> >
> > * PCIe Component Measurement and Authentication (PCIe r6.1 sec
> > 6.31)
> > * Compute Express Link (CXL r3.0 sec 14.11.6)
> > * Open Compute Project (Attestation of System Components r1.0)
> >  
> > https://www.opencompute.org/documents/attestation-v1-0-20201104-pdf
>
> Alastair, would it make sense to also call out some of the storage
> use cases you are interested in?

I don't really have anything to add at the moment. I think PCIe CMA
covers the current DOE work

Alistair

2023-10-12 04:37:48

by Damien Le Moal

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On 10/12/23 12:26, Alistair Francis wrote:
> On Tue, 2023-10-03 at 15:39 +0100, Jonathan Cameron wrote:
>> On Thu, 28 Sep 2023 19:32:37 +0200
>> Lukas Wunner <[email protected]> wrote:
>>
>>> From: Jonathan Cameron <[email protected]>
>>>
>>> The Security Protocol and Data Model (SPDM) allows for
>>> authentication,
>>> measurement, key exchange and encrypted sessions with devices.
>>>
>>> A commonly used term for authentication and measurement is
>>> attestation.
>>>
>>> SPDM was conceived by the Distributed Management Task Force (DMTF).
>>> Its specification defines a request/response protocol spoken
>>> between
>>> host and attached devices over a variety of transports:
>>>
>>>   https://www.dmtf.org/dsp/DSP0274
>>>
>>> This implementation supports SPDM 1.0 through 1.3 (the latest
>>> version).
>>
>> I've no strong objection in allowing 1.0, but I think we do need
>> to control min version accepted somehow as I'm not that keen to get
>> security folk analyzing old version...
>
> Agreed. I'm not sure we even need to support 1.0
>
>>
>>> It is designed to be transport-agnostic as the kernel already
>>> supports
>>> two different SPDM-capable transports:
>>>
>>> * PCIe Data Object Exchange (PCIe r6.1 sec 6.30, drivers/pci/doe.c)
>>> * Management Component Transport Protocol (MCTP,
>>>   Documentation/networking/mctp.rst)
>>
>> The MCTP side of things is going to be interesting because mostly you
>> need to jump through a bunch of hoops (address assignment, routing
>> setup
>> etc) before you can actually talk to a device.   That all involves
>> a userspace agent.  So I'm not 100% sure how this will all turn out.
>> However still makes sense to have a transport agnostic implementation
>> as if nothing else it makes it easier to review as keeps us within
>> one specification.
>
> This list will probably expand in the future though
>
>>>
>>> Use cases for SPDM include, but are not limited to:
>>>
>>> * PCIe Component Measurement and Authentication (PCIe r6.1 sec
>>> 6.31)
>>> * Compute Express Link (CXL r3.0 sec 14.11.6)
>>> * Open Compute Project (Attestation of System Components r1.0)
>>>  
>>> https://www.opencompute.org/documents/attestation-v1-0-20201104-pdf
>>
>> Alastair, would it make sense to also call out some of the storage
>> use cases you are interested in?
>
> I don't really have anything to add at the moment. I think PCIe CMA
> covers the current DOE work

Specifications for SPDM encapsulation in SCSI and ATA commands (SECURITY
PROTOCOL IN/OUT and TRUSTED SEND/RECEIVE) are being worked on now, but that is
still in the early phases of definition. So that support can come later. I
suspect the API may need some modification to accommodate that use case, but
we need a more complete specification first to clearly see what is needed (if
anything at all).


--
Damien Le Moal
Western Digital Research

2023-10-12 07:16:49

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Thu, Oct 12, 2023 at 03:26:44AM +0000, Alistair Francis wrote:
> On Tue, 2023-10-03 at 15:39 +0100, Jonathan Cameron wrote:
> > On Thu, 28 Sep 2023 19:32:37 +0200 Lukas Wunner <[email protected]> wrote:
> > > This implementation supports SPDM 1.0 through 1.3 (the latest
> > > version).
> >
> > I've no strong objection in allowing 1.0, but I think we do need
> > to control min version accepted somehow as I'm not that keen to get
> > security folk analyzing old version...
>
> Agreed. I'm not sure we even need to support 1.0

According to PCIe r6.1 page 115 ("Reference Documents"):

"CMA requires SPDM Version 1.0 or above. IDE requires SPDM Version 1.1
or above. TDISP requires version 1.2 or above."

This could be interpreted as SPDM 1.0 support being mandatory to be
spec-compliant. Even if we drop support for 1.0 from the initial
bringup patches, someone could later come along and propose a patch
to re-add it on the grounds of the above-quoted spec section.
So I think we can't avoid it.
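
That said, the minimum *accepted* version could still be made configurable
without dropping 1.0 support. A rough sketch (the parameter and helper are
illustrations only, not part of the posted library):

#include <linux/errno.h>
#include <linux/module.h>

/*
 * Hypothetical knob: keep accepting SPDM 1.0 by default for spec
 * compliance, but let the administrator raise the floor.
 */
static unsigned int spdm_min_version = 0x10;	/* (major << 4) | minor, i.e. 1.0 */
module_param(spdm_min_version, uint, 0444);
MODULE_PARM_DESC(spdm_min_version, "lowest SPDM version accepted from devices");

static int spdm_check_version(u8 major, u8 minor)
{
        unsigned int version = (major << 4) | minor;

        return version >= spdm_min_version ? 0 : -EPROTONOSUPPORT;
}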

Thanks,

Lukas

2023-10-12 09:16:50

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
> But the way SPDM is done now is that if the user (as myself) wants to let
> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
> as CMA is not a (un)loadable module or built-in (with some "blacklist"
> parameters), and does not provide a sysfs knob to control its tentacles.
> Kinda harsh.

On AMD SEV-TIO, does the PSP perform SPDM exchanges with a device
*before* it is passed through to a guest? If so, why does it do that?

Dan and I discussed this off-list and Dan is arguing for lazy attestation,
i.e. the TSM should only have the need to perform SPDM exchanges with
the device when it is passed through.

So the host enumerates the DOE protocols and authenticates the device.
When the device is passed through, patch 12/12 ensures that the host
keeps its hands off of the device, thus affording the TSM exclusive
SPDM control.

I agree that the commit message of 12/12 is utterly misleading in that
it says "the guest" is granted exclusive control. It should say "the TSM"
instead. (There might be implementations where the guest itself has
the role of the TSM and authenticates the device on its own behalf,
but PCIe r6.1 sec 11 uses the term "TSM" so that's what the commit
message needs to use.)

However apart from the necessary rewrite of the commit message and
perhaps a rename of the PCI_CMA_OWNED_BY_GUEST flag, I think patch 12/12
should already be doing exactly what you need -- provided that the
PSP doesn't perform SPDM exchanges before passthrough. If it already
performs them, say, on boot, I'd like to understand the reason.

Thanks,

Lukas

2023-10-12 11:19:08

by Alexey Kardashevskiy

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication


On 12/10/23 20:15, Lukas Wunner wrote:
> On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
>> But the way SPDM is done now is that if the user (as myself) wants to let
>> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
>> as CMA is not a (un)loadable module or built-in (with some "blacklist"
>> parameters), and does not provide a sysfs knob to control its tentacles.
>> Kinda harsh.
>
> On AMD SEV-TIO, does the PSP perform SPDM exchanges with a device
> *before* it is passed through to a guest? If so, why does it do that?

Yes, to set up IDE. SEV TIO is designed in a way that there is one
stream == set of keys per the PF's traffic class.

It is like this - imagine a TDISP+SRIOV device with hundreds of VFs passed
through to hundreds of VMs. The host still owns the PF, provides DOE for
the PSP, the PSP owns a handful of keys (one will do really, I have not
fully grasped when one would want multiple traffic classes, but ok, up to
8), and hundreds of VFs work using these few (or one) keys, and the PF works
as well, it just cannot know the IDE key (== cannot spy on the VFs via
something like a PCI bridge/retimer or logic analyzer). It is different
from what you are doing; DOE is the only common thing so far (or ever?).

btw the PSP is not able to initiate SPDM traffic by itself. When the
host decides it wants to set up IDE (via the PSP in SEV TIO), it talks to
the PSP, which can return "I want to talk to the device, here are
req/resp buffers", in a loop, until the PSP returns something else.
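
Very roughly, the host side of that loop could look like this (sketch only:
psp_tio_spdm_step() and its return convention are made-up placeholders for
the PSP mailbox interface; the DOE helpers and constants are the existing
ones from linux/pci-doe.h and this series):

/* Sketch -- psp_tio_spdm_step() is hypothetical, not a real API. */
static int host_proxy_spdm_for_psp(struct pci_dev *pdev)
{
	struct pci_doe_mb *doe_mb;
	void *req, *rsp;
	size_t req_sz, rsp_sz;
	int rc;

	doe_mb = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_PCI_SIG,
				      PCI_DOE_PROTOCOL_SECURED_CMA_SPDM);
	if (!doe_mb)
		return -ENODEV;

	/* > 0 means "PSP wants another request/response round trip" */
	while ((rc = psp_tio_spdm_step(&req, &req_sz, &rsp, &rsp_sz)) > 0) {
		rc = pci_doe(doe_mb, PCI_VENDOR_ID_PCI_SIG,
			     PCI_DOE_PROTOCOL_SECURED_CMA_SPDM,
			     req, req_sz, rsp, rsp_sz);
		if (rc < 0)
			return rc;
	}

	return rc;
}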

> Dan and I discussed this off-list and Dan is arguing for lazy attestation,
> i.e. the TSM should only have the need to perform SPDM exchanges with
> the device when it is passed through.

Well, I'd expect that in most cases VF is going to be passed through and
IDE setup is done via PF which won't be passed through in such cases as
it has to manage VFs.

> So the host enumerates the DOE protocols

Yes.

> and authenticates the device.

No objection here. But PSP will need to rerun this, but still via the
host's DOE.

> When the device is passed through, patch 12/12 ensures that the host
> keeps its hands off of the device, thus affording the TSM exclusive
> SPDM control.

If a PF is passed through - I guess yes we could use that, but how is
this going to work for a VF?

> I agree that the commit message of 12/12 is utterly misleading in that
> it says "the guest" is granted exclusive control. It should say "the TSM"
> instead. (There might be implementations where the guest itself has
> the role of the TSM and authenticates the device on its own behalf,
> but PCIe r6.1 sec 11 uses the term "TSM" so that's what the commit
> message needs to use.)

This should work as long as DOE is still available (as of today).

> However apart from the necessary rewrite of the commit message and
> perhaps a rename of the PCI_CMA_OWNED_BY_GUEST flag, I think patch 12/12
> should already be doing exactly what you need -- provided that the
> PSP doesn't perform SPDM exchanges before passthrough. If it already
> performs them, say, on boot, I'd like to understand the reason.

In our design this does not have to happen on the host's boot. But I
wonder if some PF host driver authenticated some device and then we
create a bunch of VFs and pass the SPDM ownership of the PF to the PSP
to reauthenticate it again - the already running PF host driver may
become upset, might it not? 12/12 assumes the host driver is VFIO-PCI but it
won't be; the VFs will be bound to VFIO-PCI. Hope this all makes sense. Thanks,


>
> Thanks,
>
> Lukas

--
Alexey


2023-10-12 13:13:42

by Samuel Ortiz

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Thu, Oct 12, 2023 at 11:15:42AM +0200, Lukas Wunner wrote:
> On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
> > But the way SPDM is done now is that if the user (as myself) wants to let
> > the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
> > as CMA is not a (un)loadable module or built-in (with some "blacklist"
> > parameters), and does not provide a sysfs knob to control its tentacles.
> > Kinda harsh.
>
> On AMD SEV-TIO, does the PSP perform SPDM exchanges with a device
> *before* it is passed through to a guest? If so, why does it do that?

SPDM exchanges would be done with the DSM, i.e. through the PF, which is
typically *not* passed through to guests. VFs are.

The RISC-V CoVE-IO [1] spec follows similar flows as SEV-TIO (and to
some extent TDX-Connect) and expects the host to explicitly request the
TSM to establish an SPDM connection with the DSM (PF) before passing one
VF through a TSM managed guest. VFs would be vfio bound, not the PF, so
I think patch #12 does not solve our problem here.

> Dan and I discussed this off-list and Dan is arguing for lazy attestation,
> i.e. the TSM should only have the need to perform SPDM exchanges with
> the device when it is passed through.
>
> So the host enumerates the DOE protocols and authenticates the device.
> When the device is passed through, patch 12/12 ensures that the host
> keeps its hands off of the device, thus affording the TSM exclusive
> SPDM control.

Just to re-iterate: The TSM does not talk SPDM with the passed
through device(s), but with the corresponding PF. If the host kernel
owns the SPDM connection when the TSM initiates the SPDM connection with
the DSM (For IDE key setup), the connection establishment will fail.
Both CoVE-IO and SEV-TIO (Alexey, please correct me if I'm wrong)
expect the host to explicitly ask the TSM to establish that SPDM
connection. That request should somehow come from KVM, which then would
have to destroy the existing CMA/SPDM connection in order to give the
TSM a chance to successfully establish the SPDM link.

Cheers,
Samuel.

[1] https://github.com/riscv-non-isa/riscv-ap-tee-io/blob/main/specification/07-theory_operations.adoc
>

2023-10-12 15:09:46

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Thu, 12 Oct 2023 09:16:29 +0200
Lukas Wunner <[email protected]> wrote:

> On Thu, Oct 12, 2023 at 03:26:44AM +0000, Alistair Francis wrote:
> > On Tue, 2023-10-03 at 15:39 +0100, Jonathan Cameron wrote:
> > > On Thu, 28 Sep 2023 19:32:37 +0200 Lukas Wunner <[email protected]> wrote:
> > > > This implementation supports SPDM 1.0 through 1.3 (the latest
> > > > version).
> > >
> > > I've no strong objection in allowing 1.0, but I think we do need
> > > to control min version accepted somehow as I'm not that keen to get
> > > security folk analyzing old version...
> >
> > Agreed. I'm not sure we even need to support 1.0
>
> According to PCIe r6.1 page 115 ("Reference Documents"):
>
> "CMA requires SPDM Version 1.0 or above. IDE requires SPDM Version 1.1
> or above. TDISP requires version 1.2 or above."
>
> This could be interpreted as SPDM 1.0 support being mandatory to be
> spec-compliant. Even if we drop support for 1.0 from the initial
> bringup patches, someone could later come along and propose a patch
> to re-add it on the grounds of the above-quoted spec section.
> So I think we can't avoid it.

I checked with some of our security folk and they didn't provide a
reason to avoid 1.0. It's not feature-complete, but for what it does
it's fine. So given the PCI spec line you quote, keep it for now.
We should be careful to require the newer versions for the additional
features though. We can address that when it's relevant.

Jonathan
>
> Thanks,
>
> Lukas
>

2023-10-12 15:16:01

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Thu, 12 Oct 2023 14:00:00 +1100
Alexey Kardashevskiy <[email protected]> wrote:

> On 12/10/23 03:57, Jonathan Cameron wrote:
> > On Tue, 10 Oct 2023 23:53:16 +1100
> > Alexey Kardashevskiy <[email protected]> wrote:
> >
> >> On 10/10/23 19:19, Lukas Wunner wrote:
> >>> On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
> >>>> On 10/10/23 00:49, Lukas Wunner wrote:
> >>>>> PCI Firmware Spec would seem to be appropriate. However this can't
> >>>>> be solved by the kernel community.
> >>>>
> >>>> How so? It is up to the user to decide whether it is SPDM/CMA in the kernel
> >>>> or the firmware + coco, both are quite possible (it is IDE which is not
> >>>> possible without the firmware on AMD but we are not there yet).
> >>>
> >>> The user can control ownership of CMA-SPDM e.g. through a BIOS knob.
> >>> And that BIOS knob then influences the outcome of the _OSC negotiation
> >>> between platform and OS.
> >>>
> >>>
> >>>> But the way SPDM is done now is that if the user (as myself) wants to let
> >>>> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
> >>>> as CMA is not a (un)loadable module or built-in (with some "blacklist"
> >>>> parameters), and does not provide a sysfs knob to control its tentacles.
> >>>
> >>> The problem is every single vendor thinks they can come up with
> >>> their own idea of who owns the SPDM session:
> >>>
> >>> I've looked at the Nvidia driver and they've hacked libspdm into it,
> >>> so their idea is that the device driver owns the SPDM session.
> >> >
> >>> AMD wants the host to proxy DOE but not own the SPDM session.
> >> >
> >>> We have *standards* for a reason. So that products are interoperable.
> >>
> >> There is no "standard PCI ethernet device", somehow we survive ;)
> >>
> >>> If the kernel tries to accommodate to every vendor's idea of SPDM ownership
> >>> we'll end up with an unmaintainable mess of quirks, plus sysfs knobs
> >>> which were once intended as a stopgap but can never be removed because
> >>> they're userspace ABI.
> >>
> >> The host kernel needs to accommodate the idea that it is not trusted,
> >> and neither is the BIOS.
> >>
> >>> This needs to be solved in the *specification*.
> >> >
> >>> And the existing solution for who owns a particular PCI feature is _OSC.
> >>> Hence this needs to be taken up with the Firmware Working Group at the
> >>> PCISIG.
> >>
> >> I do like the general idea of specifying things, etc but this place does
> >> not sound right. The firmware you are talking about has full access to
> >> PCI, the PSP firmware does not have any (besides the IDE keys
> >> programming), is there any example of such firmware in the PCI Firmware
> >> spec? From the BIOS standpoint, the host OS owns DOE and whatever is
> >> sent over it (on AMD SEV TIO). The host OS chooses not to compose these
> >> SPDM packets itself (while it could) in order to be able to run guests
> >> without having them to trust the host OS.
> >
> > As a minimum I'd like to see something saying - "keep away from discovery
> > protocol on this DOE instance". An ACPI _OSC or _DSM or similar could do that.
> > It won't be needed for every approach, but it might for some.
>
> I am relying on the existing DOE code to do the discovery. No ACPI in
> the SEV TIO picture.
>
> > Then either firmwware knows what to do, or a specific driver does.
> >
> > If your proxy comes up late enough that we've already done (and cached) discovery
> > protocols results then this might not be a problem for this particular
> > approach as we have no reason to rerun discovery (other than hotplug in which
> > case there is lots of other stuff to do anyway).
> >
> > For your case we need some hooks for the PSP to be able to drive the SPDM session
> > but that should be easy to allow.
>
> This is just a couple of calls:
> doe_md = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_PCI_SIG,
> PCI_DOE_PROTOCOL_SECURED_CMA_SPDM);
> and
> pci_doe(doe_mb, PCI_VENDOR_ID_PCI_SIG,
> PCI_DOE_PROTOCOL_SECURED_CMA_SPDM, ...)
>
>
> > I don't think precludes the hypervisor also
> > verifying the hardware is trusted by it along the way (though not used for IDE).
> > So if you are relying on a host OS proxy I don't think you need to disable CONFIG_CMA
> > (maybe something around resets?)
>
> If I do the above 2 calls, then pdev->spdm_state will be out of sync.

Understood - we might need a hand-off function call. Or put something
in pci_find_doe_mailbox() - though currently we don't have a way to release
it again.

Alternatively we might not care that it's out of sync. The host core code
doesn't need to know if you separately created an SPDM session.
It might just take a comment saying that the variable only indicates the state
of the host-kernel-managed SPDM session - there might be others.

>
> > Potentially the host OS tries first (maybe succeeds - that doesn't matter though
> > nothing wrong with defense in depth) and then the PSP via a proxy does it all over
> > again which is fine. All we need to do is guarantee ordering and I think we are
> > fine for that.
>
> Only trusted bits go all over again, untrusted stuff such as discovery
> is still done by the host OS and PSP is not rerunning it.
>
>
> > Far too many possible models here but such is life I guess.
>
> True. When I joined the x86 world (quite recently), I was surprised how
> different AMD and Intel are in everything besides the userspace :)

:)

>
>
> >>>> Note, this PSP firmware is not BIOS (which runs on the same core and has
> >>>> same access to PCI as the host OS), it is a separate platform processor
> >>>> which only programs IDE keys to the PCI RC (via some some internal bus
> >>>> mechanism) but does not do anything on the bus itself and relies on the host
> >>>> OS proxying DOE, and there is no ACPI between the core and the PSP.
> >>>
> >>> Somewhat tangentially, would it be possible in your architecture
> >>> that the host or guest asks PSP to program IDE keys into the Root Port?
> >>
> >> Sure it is possible to implement. But this does not help our primary use
> >> case which is confidential VMs where the host OS is not trusted with the
> >> keys.
> >>
> >>> Or alternatively, access the key registers directly without PSP involvement?
> >>
> >> No afaik, for the reason above.
>
>

2023-10-12 15:25:21

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Thu, 12 Oct 2023 22:18:27 +1100
Alexey Kardashevskiy <[email protected]> wrote:

> On 12/10/23 20:15, Lukas Wunner wrote:
> > On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
> >> But the way SPDM is done now is that if the user (as myself) wants to let
> >> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
> >> as CMA is not a (un)loadable module or built-in (with some "blacklist"
> >> parameters), and does not provide a sysfs knob to control its tentacles.
> >> Kinda harsh.
> >
> > On AMD SEV-TIO, does the PSP perform SPDM exchanges with a device
> > *before* it is passed through to a guest? If so, why does it do that?
>
> Yes, to set up IDE. SEV TIO is designed in a way that there is one
> stream == set of keys per the PF's traffic class.
>
> It is like this - imagine a TDISP+SRIOV device with hundreds VFs passed
> through to hundreds VMs. The host still owns the PF, provides DOE for
> the PSP, the PSP owns a handful of keys (one will do really, I have not
> fully grasped the idea when one would want traffic classes but ok, up to
> 8), and hundreds VFs work using this few (or one) keys, and the PF works
> as well, just cannot know the IDE key (==cannot spy on VFs via something
> like PCI bridge/retimer or logic analyzer). It is different than what
> you are doing, DOE is the only common thing so far (or ever?).
>
> btw the PSP is not able to initiate SPDM traffic by itself, when the
> host decides it wants to setup IDE (via a PSP in SEV TIO), it talks to
> the PSP which can return "I want to talk to the device, here are
> req/resp buffers", in a loop, until the PSP returns something else.
>
> > Dan and I discussed this off-list and Dan is arguing for lazy attestation,
> > i.e. the TSM should only have the need to perform SPDM exchanges with
> > the device when it is passed through.
>
> Well, I'd expect that in most cases VF is going to be passed through and
> IDE setup is done via PF which won't be passed through in such cases as
> it has to manage VFs.
>
> > So the host enumerates the DOE protocols
>
> Yes.
>
> > and authenticates the device.
>
> No objection here. But PSP will need to rerun this, but still via the
> host's DOE.
>
> > When the device is passed through, patch 12/12 ensures that the host
> > keeps its hands off of the device, thus affording the TSM exclusive
> > SPDM control.
>
> If a PF is passed through - I guess yes we could use that, but how is
> this going to work for a VF?
>
> > I agree that the commit message of 12/12 is utterly misleading in that
> > it says "the guest" is granted exclusive control. It should say "the TSM"
> > instead. (There might be implementations where the guest itself has
> > the role of the TSM and authenticates the device on its own behalf,
> > but PCIe r6.1 sec 11 uses the term "TSM" so that's what the commit
> > message needs to use.)
>
> This should work as long as DOE is still available (as of today).
>
> > However apart from the necessary rewrite of the commit message and
> > perhaps a rename of the PCI_CMA_OWNED_BY_GUEST flag, I think patch 12/12
> > should already be doing exactly what you need -- provided that the
> > PSP doesn't perform SPDM exchanges before passthrough. If it already
> > performs them, say, on boot, I'd like to understand the reason.
>
> In our design this does not have to happen on the host's boot. But I
> wonder if some PF host driver authenticated some device and then we
> create a bunch of VFs and pass the SPDM ownership of the PF to the PSP
> to reauthenticate it again - the already running PF host driver may
> become upset, might it not? 12/12 assumes the host driver is VFIO-PCI but it
> won't be; the VFs will be bound to VFIO-PCI. Hope this all makes sense. Thanks,

Without some experiments with real drivers it will be hard to be sure, but
I'd expect it to be fine, as the host driver is bound after attestation (or
what's the point?)
In this patch set attestation only happens again on a reset or when kicking it
because of new certs. For reset, your PSP should be doing it all over again
anyway, so that can happen after the host driver has dealt with the reset.
For the manual poking to retry attestation, if the model is that we don't
load the driver until the attestation succeeds then that should be fine
(as the driver is not loaded).

The lockout needed for PF passthrough doesn't apply given we are poking
the device from the PSP via the host.

So I think patch 12 is irrelevant to your use case rather than a problem.

There may well be dragons in the corner cases. If we need a lockout for
after the PSP gets involved, then fair enough.

Jonathan

>
>
> >
> > Thanks,
> >
> > Lukas
>

2023-10-12 15:32:38

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Thu, 12 Oct 2023 15:13:31 +0200
Samuel Ortiz <[email protected]> wrote:

> On Thu, Oct 12, 2023 at 11:15:42AM +0200, Lukas Wunner wrote:
> > On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
> > > But the way SPDM is done now is that if the user (as myself) wants to let
> > > the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
> > > as CMA is not a (un)loadable module or built-in (with some "blacklist"
> > > parameters), and does not provide a sysfs knob to control its tentacles.
> > > Kinda harsh.
> >
> > On AMD SEV-TIO, does the PSP perform SPDM exchanges with a device
> > *before* it is passed through to a guest? If so, why does it do that?
>
> SPDM exchanges would be done with the DSM, i.e. through the PF, which is
> typically *not* passed through to guests. VFs are.
>
> The RISC-V CoVE-IO [1] spec follows similar flows as SEV-TIO (and to
> some extent TDX-Connect) and expects the host to explicitly request the
> TSM to establish an SPDM connection with the DSM (PF) before passing one
> VF through a TSM managed guest. VFs would be vfio bound, not the PF, so
> I think patch #12 does not solve our problem here.
>
> > Dan and I discussed this off-list and Dan is arguing for lazy attestation,
> > i.e. the TSM should only have the need to perform SPDM exchanges with
> > the device when it is passed through.
> >
> > So the host enumerates the DOE protocols and authenticates the device.
> > When the device is passed through, patch 12/12 ensures that the host
> > keeps its hands off of the device, thus affording the TSM exclusive
> > SPDM control.
>
> Just to re-iterate: The TSM does not talk SPDM with the passed
> through device(s), but with the corresponding PF. If the host kernel
> owns the SPDM connection when the TSM initiates the SPDM connection with
> the DSM (For IDE key setup), the connection establishment will fail.
> Both CoVE-IO and SEV-TIO (Alexey, please correct me if I'm wrong)
> expect the host to explicitly ask the TSM to establish that SPDM
> connection. That request should somehow come from KVM, which then would
> have to destroy the existing CMA/SPDM connection in order to give the
> TSM a chance to successfully establish the SPDM link.

Agreed - I don't see a problem with throwing away the initial connection.
In these cases you are passing that role on to another entity - the
job of this patch set is done.

I'm not clear yet if we need an explicit lockout similar to the VFIO
one for PF passthrough or if everything will happen in a 'safe' order
anyway. I suspect a lockout on the ability to re-attest is necessary
if the PF driver is loaded.

Perhaps just dropping the
+#if IS_ENABLED(CONFIG_VFIO_PCI_CORE)
and letting other PF drivers or another bit of core kernel code
(I'm not sure where the proxy resides for the models being discussed)
claim ownership is enough?

Jonathan

>
> Cheers,
> Samuel.
>
> [1] https://github.com/riscv-non-isa/riscv-ap-tee-io/blob/main/specification/07-theory_operations.adoc
> >
>

2023-10-13 05:03:15

by Samuel Ortiz

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication

On Thu, Oct 12, 2023 at 04:32:21PM +0100, Jonathan Cameron wrote:
> On Thu, 12 Oct 2023 15:13:31 +0200
> Samuel Ortiz <[email protected]> wrote:
>
> > On Thu, Oct 12, 2023 at 11:15:42AM +0200, Lukas Wunner wrote:
> > > On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
> > > > But the way SPDM is done now is that if the user (as myself) wants to let
> > > > the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
> > > > as CMA is not a (un)loadable module or built-in (with some "blacklist"
> > > > parameters), and does not provide a sysfs knob to control its tentacles.
> > > > Kinda harsh.
> > >
> > > On AMD SEV-TIO, does the PSP perform SPDM exchanges with a device
> > > *before* it is passed through to a guest? If so, why does it do that?
> >
> > SPDM exchanges would be done with the DSM, i.e. through the PF, which is
> > typically *not* passed through to guests. VFs are.
> >
> > The RISC-V CoVE-IO [1] spec follows similar flows as SEV-TIO (and to
> > some extent TDX-Connect) and expects the host to explicitly request the
> > TSM to establish an SPDM connection with the DSM (PF) before passing one
> > VF through a TSM managed guest. VFs would be vfio bound, not the PF, so
> > I think patch #12 does not solve our problem here.
> >
> > > Dan and I discussed this off-list and Dan is arguing for lazy attestation,
> > > i.e. the TSM should only have the need to perform SPDM exchanges with
> > > the device when it is passed through.
> > >
> > > So the host enumerates the DOE protocols and authenticates the device.
> > > When the device is passed through, patch 12/12 ensures that the host
> > > keeps its hands off of the device, thus affording the TSM exclusive
> > > SPDM control.
> >
> > Just to re-iterate: The TSM does not talk SPDM with the passed
> > through device(s), but with the corresponding PF. If the host kernel
> > owns the SPDM connection when the TSM initiates the SPDM connection with
> > the DSM (For IDE key setup), the connection establishment will fail.
> > Both CoVE-IO and SEV-TIO (Alexey, please correct me if I'm wrong)
> > expect the host to explicitly ask the TSM to establish that SPDM
> > connection. That request should somehow come from KVM, which then would
> > have to destroy the existing CMA/SPDM connection in order to give the
> > TSM a chance to successfully establish the SPDM link.
>
> Agreed - I don't see a problem with throwing away the initial connection.
> In these cases you are passing that role on to another entity - the
> job of this patch set is done.

Right. As long as there's a way for the kernel to explicitly drop that
ownership before calling into the TSM to ask it to create a new SPDM
connection, we should be fine. Alexey, would you agree with that
statement?

> I'm not clear yet if we need an explicit lock out similar to the VFIO
> one for PF pass through or if everything will happen in a 'safe' order
> anyway. I suspect a lockout on the ability to re attest is necessary
> if the PF driver is loaded.
>
> Perhaps just dropping the
> +#if IS_ENABLED(CONFIG_VFIO_PCI_CORE)
> and letting other PF drivers or another bit of core kernel code
> (I'm not sure where the proxy resides for the models being discussed)
> claim ownership is enough?

If we agree that other parts of the kernel (I suspect KVM would do the
"Connect to device" call to the TSM) should be able to tear down the
established SPDM connection, then yes, the claim/return_ownership() API
should not only be available to VFIO.
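
Roughly what I have in mind on the KVM side (sketch only -- the function
names are made up: the ownership pair stands for whatever patch 12/12 ends
up exporting, and tsm_connect() for the call into the TSM):

int kvm_tsm_connect_device(struct pci_dev *pf_dev)
{
	int rc;

	/* revoke host-native CMA so the TSM's GET_VERSION isn't clobbered */
	pci_cma_claim_ownership(pf_dev);

	/* TSM establishes its own SPDM session with the DSM (PF) */
	rc = tsm_connect(pf_dev);
	if (rc)
		pci_cma_return_ownership(pf_dev); /* fall back to host CMA */

	return rc;
}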

Cheers,
Samuel.

2023-10-13 11:46:21

by Alexey Kardashevskiy

[permalink] [raw]
Subject: Re: [PATCH 00/12] PCI device authentication


On 13/10/23 16:03, Samuel Ortiz wrote:
> On Thu, Oct 12, 2023 at 04:32:21PM +0100, Jonathan Cameron wrote:
>> On Thu, 12 Oct 2023 15:13:31 +0200
>> Samuel Ortiz <[email protected]> wrote:
>>
>>> On Thu, Oct 12, 2023 at 11:15:42AM +0200, Lukas Wunner wrote:
>>>> On Tue, Oct 10, 2023 at 03:07:41PM +1100, Alexey Kardashevskiy wrote:
>>>>> But the way SPDM is done now is that if the user (as myself) wants to let
>>>>> the firmware run SPDM - the only choice is disabling CONFIG_CMA completely
>>>>> as CMA is not a (un)loadable module or built-in (with some "blacklist"
>>>>> parameters), and does not provide a sysfs knob to control its tentacles.
>>>>> Kinda harsh.
>>>>
>>>> On AMD SEV-TIO, does the PSP perform SPDM exchanges with a device
>>>> *before* it is passed through to a guest? If so, why does it do that?
>>>
>>> SPDM exchanges would be done with the DSM, i.e. through the PF, which is
>>> typically *not* passed through to guests. VFs are.
>>>
>>> The RISC-V CoVE-IO [1] spec follows similar flows as SEV-TIO (and to
>>> some extent TDX-Connect) and expects the host to explicitly request the
>>> TSM to establish an SPDM connection with the DSM (PF) before passing one
>>> VF through a TSM managed guest. VFs would be vfio bound, not the PF, so
>>> I think patch #12 does not solve our problem here.
>>>
>>>> Dan and I discussed this off-list and Dan is arguing for lazy attestation,
>>>> i.e. the TSM should only have the need to perform SPDM exchanges with
>>>> the device when it is passed through.
>>>>
>>>> So the host enumerates the DOE protocols and authenticates the device.
>>>> When the device is passed through, patch 12/12 ensures that the host
>>>> keeps its hands off of the device, thus affording the TSM exclusive
>>>> SPDM control.
>>>
>>> Just to re-iterate: The TSM does not talk SPDM with the passed
>>> through device(s), but with the corresponding PF. If the host kernel
>>> owns the SPDM connection when the TSM initiates the SPDM connection with
>>> the DSM (For IDE key setup), the connection establishment will fail.
>>> Both CoVE-IO and SEV-TIO (Alexey, please correct me if I'm wrong)
>>> expect the host to explicitly ask the TSM to establish that SPDM
>>> connection. That request should somehow come from KVM, which then would
>>> have to destroy the existing CMA/SPDM connection in order to give the
>>> TSM a chance to successfully establish the SPDM link.
>>
>> Agreed - I don't see a problem with throwing away the initial connection.
>> In these cases you are passing that role on to another entity - the
>> job of this patch set is done.
>
> Right. As long as there's a way for the kernel to explicitly drop that
> ownership before calling into the TSM for asking it to create a new SPDM
> connection, we should be fine. Alexey, would you agree with that
> statement?

Yes, sounds right.

>> I'm not clear yet if we need an explicit lock out similar to the VFIO
>> one for PF pass through or if everything will happen in a 'safe' order
>> anyway. I suspect a lockout on the ability to re attest is necessary
>> if the PF driver is loaded.
>>
>> Perhaps just dropping the
>> +#if IS_ENABLED(CONFIG_VFIO_PCI_CORE)
>> and letting other PF drivers or another bit of core kernel code
>> (I'm not sure where the proxy resides for the models being discussed)
>> claim ownership is enough?
>
> If we agree that other parts of the kernel (I suspect KVM would do the
> "Connect to device" call to the TSM) should be able to tear the
> established SPDM connection, then yes, the claim/return_ownership() API
> should not be only available to VFIO.

Correct. I just want to make sure that the DOE mailboxes stay alive and
that nothing in the host kernel relies on SPDM still being active after
ownership is transferred to the TSM==PSP.

>
> Cheers,
> Samuel.

--
Alexey


2023-10-18 19:58:26

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

Jonathan Cameron wrote:
> On Tue, 3 Oct 2023 21:30:58 +0200
> Lukas Wunner <[email protected]> wrote:
>
> > On Tue, Oct 03, 2023 at 04:40:48PM +0100, Jonathan Cameron wrote:
> > > On Thu, 28 Sep 2023 19:32:42 +0200 Lukas Wunner <[email protected]> wrote:
> > > > At any given time, only a single entity in a physical system may have
> > > > an SPDM connection to a device. That's because the GET_VERSION request
> > > > (which begins an authentication sequence) resets "the connection and all
> > > > context associated with that connection" (SPDM 1.3.0 margin no 158).
> > > >
> > > > Thus, when a device is passed through to a guest and the guest has
> > > > authenticated it, a subsequent authentication by the host would reset
> > > > the device's CMA-SPDM session behind the guest's back.
> > > >
> > > > Prevent by letting the guest claim exclusive CMA ownership of the device
> > > > during passthrough. Refuse CMA reauthentication on the host as long.
> > > > After passthrough has concluded, reauthenticate the device on the host.
> > >
> > > Is there anything stopping a PF presenting multiple CMA capable DOE
> > > instances? I'd expect them to have their own contexts if they do..
> >
> > The spec does not seem to *explicitly* forbid a PF having multiple
> > CMA-capable DOE instances, but PCIe r6.1 sec 6.31.3 says:
> > "The instance of DOE used for CMA-SPDM must support ..."
> >
> > Note the singular ("The instance"). It seems to suggest that the
> > spec authors assumed there's only a single DOE instance for CMA-SPDM.
>
> It's a little messy and a bit of American vs British English I think.
> If it said
> "The instance of DOE used for a specific CMA-SPDM must support..."
> then it would clearly allow multiple instances. However, conversely,
> I don't read that sentence as blocking multiple instances (even though
> I suspect you are right and the author was thinking of there being one).
>
> >
> > Could you (as an English native speaker) comment on the clarity of the
> > two sentences "Prevent ... as long." above, as Ilpo objected to them?
> >
> > The antecedent of "Prevent" is the undesirable behaviour in the preceding
> > sentence (host resets guest's SPDM connection).
> >
> > The antecedent of "as long" is "during passthrough" in the preceding
> > sentence.
> >
> > Is that clear and understandable for an English native speaker or
> > should I rephrase?
>
> Not clear enough to me as it stands. That "as long" definitely feels
> like there is more to follow it as Ilpo noted.
>
> Maybe reword as something like
>
> Prevent this by letting the guest claim exclusive ownership of the device
> during passthrough ensuring problematic CMA reauthentication by the host
> is blocked.

My contribution to the prose here is to clarify that this mechanism is
less about "appoint the guest as the exslusive owner" and more about
"revoke the bare-metal host as the authentication owner".

In fact I don't see how the guest can ever claim to "own" CMA since
config-space is always emulated to the guest. So the guest will always
be in a situation where it needs to proxy SPDM related operations. The
proxy is either terminated in the host as native SPDM on behalf of the
guest, or further proxied to the platform-TSM.

So let's just clarify that at assignment, host control is revoked, and
the guest is afforded the opportunity to re-establish authentication
either by asking the host to re-authenticate on the guest's behalf, or
by asking the platform-TSM to authenticate the device on the guest's
behalf.

...and even there the guest does not know if it is accessing a 1:1 VF:PF
device representation, or one VF instance of PF where the PF
authentication answer only occurs once for all potential VFs.

Actually, that brings up a question: when should host authentication be
revoked in the VF assignment case? That seems to be a policy decision
that the host needs to make globally for all VFs of a PF. If the guest
is going to opt in to relying on the host's authentication decision,
then revoking early may not make sense. It may be a decision that needs
to be deferred until the guest makes its intentions clear, and the host
will need to have a policy for how to resolve conflicts where guestA
wants "native" and guestB wants "platform-TSM".
If the VFs those guests are using map to the same PF then only one
policy can be in effect.

2023-10-19 07:59:25

by Alexey Kardashevskiy

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication


On 19/10/23 06:58, Dan Williams wrote:
> Jonathan Cameron wrote:
>> On Tue, 3 Oct 2023 21:30:58 +0200
>> Lukas Wunner <[email protected]> wrote:
>>
>>> On Tue, Oct 03, 2023 at 04:40:48PM +0100, Jonathan Cameron wrote:
>>>> On Thu, 28 Sep 2023 19:32:42 +0200 Lukas Wunner <[email protected]> wrote:
>>>>> At any given time, only a single entity in a physical system may have
>>>>> an SPDM connection to a device. That's because the GET_VERSION request
>>>>> (which begins an authentication sequence) resets "the connection and all
>>>>> context associated with that connection" (SPDM 1.3.0 margin no 158).
>>>>>
>>>>> Thus, when a device is passed through to a guest and the guest has
>>>>> authenticated it, a subsequent authentication by the host would reset
>>>>> the device's CMA-SPDM session behind the guest's back.
>>>>>
>>>>> Prevent by letting the guest claim exclusive CMA ownership of the device
>>>>> during passthrough. Refuse CMA reauthentication on the host as long.
>>>>> After passthrough has concluded, reauthenticate the device on the host.
>>>>
>>>> Is there anything stopping a PF presenting multiple CMA capable DOE
>>>> instances? I'd expect them to have their own contexts if they do..
>>>
>>> The spec does not seem to *explicitly* forbid a PF having multiple
>>> CMA-capable DOE instances, but PCIe r6.1 sec 6.31.3 says:
>>> "The instance of DOE used for CMA-SPDM must support ..."
>>>
>>> Note the singular ("The instance"). It seems to suggest that the
>>> spec authors assumed there's only a single DOE instance for CMA-SPDM.
>>
>> It's a little messy and a bit of American vs British English I think.
>> If it said
>> "The instance of DOE used for a specific CMA-SPDM must support..."
>> then it would clearly allow multiple instances. However, conversely,
>> I don't read that sentence as blocking multiple instances (even though
>> I suspect you are right and the author was thinking of there being one).
>>
>>>
>>> Could you (as an English native speaker) comment on the clarity of the
>>> two sentences "Prevent ... as long." above, as Ilpo objected to them?
>>>
>>> The antecedent of "Prevent" is the undesirable behaviour in the preceding
>>> sentence (host resets guest's SPDM connection).
>>>
>>> The antecedent of "as long" is "during passthrough" in the preceding
>>> sentence.
>>>
>>> Is that clear and understandable for an English native speaker or
>>> should I rephrase?
>>
>> Not clear enough to me as it stands. That "as long" definitely feels
>> like there is more to follow it as Ilpo noted.
>>
>> Maybe reword as something like
>>
>> Prevent this by letting the guest claim exclusive ownership of the device
>> during passthrough ensuring problematic CMA reauthentication by the host
>> is blocked.
>
> My contribution to the prose here is to clarify that this mechanism is
> less about "appoint the guest as the exslusive owner" and more about
> "revoke the bare-metal host as the authentication owner".
>
> In fact I don't see how the guest can ever claim to "own" CMA since
> config-space is always emulated to the guest.

This is no different for the PSP and bare-metal Linux: the PSP does not
have direct access to the config space either.

> So the guest will always
> be in a situation where it needs to proxy SPDM related operations. The
> proxy is either terminated in the host as native SPDM on behalf of the
> guest, or further proxied to the platform-TSM.
>
> So let's just clarify that at assignment, host control is revoked, and
> the guest is afforded the opportunity to re-establish authentication
> either by asking the host re-authenticate on the guest's behalf, or
> asking the platform-tsm to authenticate the device on the guest's
> behalf.
> ...and even there the guest does not know if it is accessing a 1:1 VF:PF
> device representation, or one VF instance of PF where the PF
> authentication answer only occurs once for all potential VFs.
>
> Actually, that brings up a question about when to revoke host
> authentication in the VF assignment case? That seems to be a policy
> decision that the host needs to make globally for all VFs of a PF. If
> the guest is going to opt-in to relying on the host's authentication
> decision then the revoking early may not make sense.

> It may be a
> decision that needs to be deferred until the guest makes its intentions
> clear, and the host will need to have policy around how to resolve
> conflicts between guestA wants "native" and guestB wants "platform-TSM".
> If the VFs those guests are using map to the same PF then only one
> policy can be in effect.

To own IDE, the guest will have to have exclusive access to the portion
of the RC responsible for the IDE keys. That is doable but requires
passing through both the RC and the device, and probably everything
between the two. "Host-native" and "guest-native" are going to be quite
different. How are IDE keys going to be programmed into the RC on
Intel?


--
Alexey


2023-10-24 17:04:57

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH 12/12] PCI/CMA: Grant guests exclusive control of authentication

Alexey Kardashevskiy wrote:
[..]
> To own IDE, the guest will have to have exclusive access to the portion
> of RC responsible for the IDE keys. Which is doable but requires passing
> through both RC and the device and probably everything between these
> two. It is going to be quite different "host-native" and
> "guest-native". How IDE keys are going to be programmed into the RC on
> Intel?

I do not think the guest can "own IDE" in any meaningful way. It is
always going to be a PF-level policy coordinated either by the host or
the platform-TSM, and as far as I can see all end-user interest
currently lies in the platform-TSM case.

Now, there is definitely value in considering, in the code design, how a
guest can maximize security in the absence of a platform-TSM, but that
does not diminish the need for a path for the guest to coordinate the
life-cycle through the platform-TSM. Otherwise, as you mention, passing
through the host-bridge resources and the VF has challenges.

2024-02-04 17:25:34

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Tue, Oct 03, 2023 at 03:39:37PM +0100, Jonathan Cameron wrote:
> On Thu, 28 Sep 2023 19:32:37 +0200 Lukas Wunner <[email protected]> wrote:
> > +/**
> > + * spdm_challenge_rsp_sz() - Calculate CHALLENGE_AUTH response size
> > + *
> > + * @spdm_state: SPDM session state
> > + * @rsp: CHALLENGE_AUTH response (optional)
> > + *
> > + * A CHALLENGE_AUTH response contains multiple variable-length fields
> > + * as well as optional fields. This helper eases calculating its size.
> > + *
> > + * If @rsp is %NULL, assume the maximum OpaqueDataLength of 1024 bytes
> > + * (SPDM 1.0.0 table 21). Otherwise read OpaqueDataLength from @rsp.
> > + * OpaqueDataLength can only be > 0 for SPDM 1.0 and 1.1, as they lack
> > + * the OtherParamsSupport field in the NEGOTIATE_ALGORITHMS request.
> > + * For SPDM 1.2+, we do not offer any Opaque Data Formats in that field,
> > + * which forces OpaqueDataLength to 0 (SPDM 1.2.0 margin no 261).
> > + */
> > +static size_t spdm_challenge_rsp_sz(struct spdm_state *spdm_state,
> > + struct spdm_challenge_rsp *rsp)
> > +{
> > + size_t size = sizeof(*rsp) /* Header */
>
> Double spaces look a bit strange...
>
> > + + spdm_state->h /* CertChainHash */
> > + + 32; /* Nonce */
> > +
> > + if (rsp)
> > + /* May be unaligned if hash algorithm has unusual length. */
> > + size += get_unaligned_le16((u8 *)rsp + size);
> > + else
> > + size += SPDM_MAX_OPAQUE_DATA; /* OpaqueData */
> > +
> > + size += 2; /* OpaqueDataLength */
> > +
> > + if (spdm_state->version >= 0x13)
> > + size += 8; /* RequesterContext */
> > +
> > + return size + spdm_state->s; /* Signature */
>
> Double space here as well looks odd to me.

This was criticized by Ilpo as well, but the double spaces are
intentional to vertically align "size" on each line for neatness.

How strongly do you guys feel about it? ;)


> > +int spdm_authenticate(struct spdm_state *spdm_state)
> > +{
> > + size_t transcript_sz;
> > + void *transcript;
> > + int rc = -ENOMEM;
> > + u8 slot;
> > +
> > + mutex_lock(&spdm_state->lock);
> > + spdm_reset(spdm_state);
[...]
> > + rc = spdm_challenge(spdm_state, slot);
> > +
> > +unlock:
> > + if (rc)
> > + spdm_reset(spdm_state);
>
> I'd expect reset to also clear authenticated. Seems odd to do it separately
> and relies on reset only being called here. If that were the case and you
> were handling locking and freeing using cleanup.h magic, then
>
> rc = spdm_challenge(spdm_state);
> if (rc)
> goto reset;
> return 0;
>
> reset:
> spdm_reset(spdm_state);

Unfortunately clearing "authenticated" in spdm_reset() is not an
option:

Note that spdm_reset() is also called at the top of spdm_authenticate().

If the device was previously successfully authenticated and is now
re-authenticated successfully, clearing "authenticated" in spdm_reset()
would cause the flag to be briefly set to false, which may irritate
user space inspecting the sysfs attribute at just the wrong moment.

Put differently, across a successful re-authentication I want the
"authenticated" attribute to show "true" without any gaps. Hence it's
only cleared at the end of spdm_authenticate() if there was an error.
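
Concretely, the tail of spdm_authenticate() roughly looks like this
(simplified sketch of the intent, not the verbatim patch):

	rc = spdm_challenge(spdm_state, slot);

unlock:
	if (rc) {
		spdm_reset(spdm_state);
		spdm_state->authenticated = false;	/* cleared only on failure */
	} else {
		spdm_state->authenticated = true;	/* no visible gap on re-auth */
	}
	mutex_unlock(&spdm_state->lock);
	return rc;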

I agree with all your other review feedback and have amended the
patch accordingly. Thanks a lot!

Lukas

2024-02-05 10:39:21

by Jonathan Cameron

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Sun, 4 Feb 2024 18:25:10 +0100
Lukas Wunner <[email protected]> wrote:

> On Tue, Oct 03, 2023 at 03:39:37PM +0100, Jonathan Cameron wrote:
> > On Thu, 28 Sep 2023 19:32:37 +0200 Lukas Wunner <[email protected]> wrote:
> > > +/**
> > > + * spdm_challenge_rsp_sz() - Calculate CHALLENGE_AUTH response size
> > > + *
> > > + * @spdm_state: SPDM session state
> > > + * @rsp: CHALLENGE_AUTH response (optional)
> > > + *
> > > + * A CHALLENGE_AUTH response contains multiple variable-length fields
> > > + * as well as optional fields. This helper eases calculating its size.
> > > + *
> > > + * If @rsp is %NULL, assume the maximum OpaqueDataLength of 1024 bytes
> > > + * (SPDM 1.0.0 table 21). Otherwise read OpaqueDataLength from @rsp.
> > > + * OpaqueDataLength can only be > 0 for SPDM 1.0 and 1.1, as they lack
> > > + * the OtherParamsSupport field in the NEGOTIATE_ALGORITHMS request.
> > > + * For SPDM 1.2+, we do not offer any Opaque Data Formats in that field,
> > > + * which forces OpaqueDataLength to 0 (SPDM 1.2.0 margin no 261).
> > > + */
> > > +static size_t spdm_challenge_rsp_sz(struct spdm_state *spdm_state,
> > > + struct spdm_challenge_rsp *rsp)
> > > +{
> > > + size_t size = sizeof(*rsp) /* Header */
> >
> > Double spaces look a bit strange...
> >
> > > + + spdm_state->h /* CertChainHash */
> > > + + 32; /* Nonce */
> > > +
> > > + if (rsp)
> > > + /* May be unaligned if hash algorithm has unusual length. */
> > > + size += get_unaligned_le16((u8 *)rsp + size);
> > > + else
> > > + size += SPDM_MAX_OPAQUE_DATA; /* OpaqueData */
> > > +
> > > + size += 2; /* OpaqueDataLength */
> > > +
> > > + if (spdm_state->version >= 0x13)
> > > + size += 8; /* RequesterContext */
> > > +
> > > + return size + spdm_state->s; /* Signature */
> >
> > Double space here as well looks odd to me.
>
> This was criticized by Ilpo as well, but the double spaces are
> intentional to vertically align "size" on each line for neatness.
>
> How strongly do you guys feel about it? ;)

I suspect we'll see 'fixes' for this creating noise for maintainers.
So whilst I don't feel that strongly about it, I'm not sure the alignment
really helps much with readability either.

>
>
> > > +int spdm_authenticate(struct spdm_state *spdm_state)
> > > +{
> > > + size_t transcript_sz;
> > > + void *transcript;
> > > + int rc = -ENOMEM;
> > > + u8 slot;
> > > +
> > > + mutex_lock(&spdm_state->lock);
> > > + spdm_reset(spdm_state);
> [...]
> > > + rc = spdm_challenge(spdm_state, slot);
> > > +
> > > +unlock:
> > > + if (rc)
> > > + spdm_reset(spdm_state);
> >
> > I'd expect reset to also clear authenticated. Seems odd to do it separately
> > and relies on reset only being called here. If that were the case and you
> > were handling locking and freeing using cleanup.h magic, then
> >
> > rc = spdm_challenge(spdm_state);
> > if (rc)
> > goto reset;
> > return 0;
> >
> > reset:
> > spdm_reset(spdm_state);
>
> Unfortunately clearing "authenticated" in spdm_reset() is not an
> option:
>
> Note that spdm_reset() is also called at the top of spdm_authenticate().
>
> If the device was previously successfully authenticated and is now
> re-authenticated successfully, clearing "authenticated" in spdm_reset()
> would cause the flag to be briefly set to false, which may irritate
> user space inspecting the sysfs attribute at just the wrong moment.

That makes sense. Thanks.

>
> If the device was previously successfully authenticated and is
> re-authenticated successfully, I want the "authenticated" attribute
> to show "true" without any gaps. Hence it's only cleared at the end
> of spdm_authenticate() if there was an error.
>
> I agree with all your other review feedback and have amended the
> patch accordingly. Thanks a lot!
>
> Lukas
>


2024-02-09 20:32:18

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Tue, Oct 03, 2023 at 01:35:26PM +0300, Ilpo Järvinen wrote:
> On Thu, 28 Sep 2023, Lukas Wunner wrote:
> > +typedef int (spdm_transport)(void *priv, struct device *dev,
> > + const void *request, size_t request_sz,
> > + void *response, size_t response_sz);
>
> This returns a length or an error, right? If so return ssize_t instead.
>
> If you make this change, alter the caller types too.

Alright, I've changed the types in __spdm_exchange() and spdm_exchange().

However the callers of those functions assign the result to an "rc" variable
which is also used to receive an "int" return value.
E.g. spdm_get_digests() assigns the ssize_t result of spdm_exchange() to rc
but also the int result of crypto_shash_update().

It feels awkward to change the type of "rc" to "ssize_t" in those
functions, so I kept "int".
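
For reference, the transport typedef in the amended patch now reads roughly
as follows (sketch, not the final wording):

typedef ssize_t (spdm_transport)(void *priv, struct device *dev,
				 const void *request, size_t request_sz,
				 void *response, size_t response_sz);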


> > +} __packed;
> > +
> > +#define SPDM_GET_CAPABILITIES 0xE1
>
> There's non-capital hex later in the file, please try to be consistent.

The spec uses capital hex characters, so this was done to ease
connecting the implementation to the spec.

OTOH I don't want to capitalize all the hex codes in enum spdm_error_code.

So I guess consistency takes precedence and I've amended the
patch to downcase all hex characters, as you've requested.


> > +struct spdm_error_rsp {
> > + u8 version;
> > + u8 code;
> > + enum spdm_error_code error_code:8;
> > + u8 error_data;
> > +
> > + u8 extended_error_data[];
> > +} __packed;
>
> Is this always going to produce the layout you want given the alignment
> requirements for the storage unit for u8 and enum are probably different?

Yes, the __packed attribute forces the compiler to avoid padding.


> > + spdm_state->responder_caps = le32_to_cpu(rsp->flags);
>
> Earlier, unaligned accessors where used with the version_number_entries.
> Is it intentional they're not used here (I cannot see what would be
> reason for this difference)?

Thanks, good catch. Indeed this is not necessarily naturally aligned
because the GET_CAPABILITIES request and response follow the
GET_VERSION response in the same allocation. And the GET_VERSION
response size is a multiple of 2, but not always a multiple of 4.

So I've amended the patch to use a separate allocation for the
GET_CAPABILITIES request and response. The spec-defined struct layout
of those messages is such that the 32-bit accesses are indeed always
naturally aligned.

The existing unaligned accessor in spdm_get_version() turned out
to be unnecessary after taking a closer look, so I dropped that one.


> > +static int spdm_negotiate_algs(struct spdm_state *spdm_state,
> > + void *transcript, size_t transcript_sz)
> > +{
> > + struct spdm_req_alg_struct *req_alg_struct;
> > + struct spdm_negotiate_algs_req *req;
> > + struct spdm_negotiate_algs_rsp *rsp;
> > + size_t req_sz = sizeof(*req);
> > + size_t rsp_sz = sizeof(*rsp);
> > + int rc, length;
> > +
> > + /* Request length shall be <= 128 bytes (SPDM 1.1.0 margin no 185) */
> > + BUILD_BUG_ON(req_sz > 128);
>
> I don't know why this really has to be here? This could be static_assert()
> below the struct declaration.

A follow-on patch to add key exchange support increases req_sz based on
an SPDM_MAX_REQ_ALG_STRUCT macro defined here in front of the function
where it's used. That's the reason why the size is checked here as well.
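
For illustration, the follow-on patch does something along these lines
(the value of SPDM_MAX_REQ_ALG_STRUCT is assumed here, not quoted from
the actual patch):

#define SPDM_MAX_REQ_ALG_STRUCT	4	/* assumed value, for illustration */

	size_t req_sz = sizeof(*req) + SPDM_MAX_REQ_ALG_STRUCT *
				       sizeof(struct spdm_req_alg_struct);

	/* Request length shall be <= 128 bytes (SPDM 1.1.0 margin no 185) */
	BUILD_BUG_ON(sizeof(*req) + SPDM_MAX_REQ_ALG_STRUCT *
				    sizeof(struct spdm_req_alg_struct) > 128);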


> > +static int spdm_get_certificate(struct spdm_state *spdm_state, u8 slot)
> > +{
> > + struct spdm_get_certificate_req req = {
> > + .code = SPDM_GET_CERTIFICATE,
> > + .param1 = slot,
> > + };
> > + struct spdm_get_certificate_rsp *rsp;
> > + struct spdm_cert_chain *certs = NULL;
> > + size_t rsp_sz, total_length, header_length;
> > + u16 remainder_length = 0xffff;
>
> 0xffff in this function should use either U16_MAX or SZ_64K - 1.

The SPDM spec uses 0xffff so I'm deliberately using that as well
to make the connection to the spec obvious.


> > +static void spdm_create_combined_prefix(struct spdm_state *spdm_state,
> > + const char *spdm_context, void *buf)
> > +{
> > + u8 minor = spdm_state->version & 0xf;
> > + u8 major = spdm_state->version >> 4;
> > + size_t len = strlen(spdm_context);
> > + int rc, zero_pad;
> > +
> > + rc = snprintf(buf, SPDM_PREFIX_SZ + 1,
> > + "dmtf-spdm-v%hhx.%hhx.*dmtf-spdm-v%hhx.%hhx.*"
> > + "dmtf-spdm-v%hhx.%hhx.*dmtf-spdm-v%hhx.%hhx.*",
> > + major, minor, major, minor, major, minor, major, minor);
>
> Why are these using s8 formatting specifier %hhx ??

I don't quite follow, "%hhx" is an unsigned char, not a signed char.

spdm_state->version may contain e.g. 0x12 which is converted to
"dmtf-spdm-v1.2.*" here.

The question is what happens if the major or minor version goes beyond 9.
The total length of the prefix is hard-coded by the spec, hence my
expectation is that 1.10 will be represented as "dmtf-spdm-v1.a.*"
to not exceed the length. The code follows that expectation.

Thanks for taking a look! I've amended the patch to take all your
other feedback into account.

Lukas

2024-02-12 11:47:37

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Fri, 9 Feb 2024, Lukas Wunner wrote:

> On Tue, Oct 03, 2023 at 01:35:26PM +0300, Ilpo Järvinen wrote:
> > On Thu, 28 Sep 2023, Lukas Wunner wrote:
> > > +typedef int (spdm_transport)(void *priv, struct device *dev,
> > > + const void *request, size_t request_sz,
> > > + void *response, size_t response_sz);
> >
> > This returns a length or an error, right? If so return ssize_t instead.
> >
> > If you make this change, alter the caller types too.
>
> Alright, I've changed the types in __spdm_exchange() and spdm_exchange().
>
> However the callers of those functions assign the result to an "rc" variable
> which is also used to receive an "int" return value.
> E.g. spdm_get_digests() assigns the ssize_t result of spdm_exchange() to rc
> but also the int result of crypto_shash_update().
>
> It feels awkward to change the type of "rc" to "ssize_t" in those
> functions, so I kept "int".

Using an ssize_t variable for return values is not that uncommon
(kernel-wide). Obviously that results in an int -> ssize_t conversion if
they call any function that only needs to return an int. But it seems
harmless.

crypto_shash_update() doesn't take a size_t like (spdm_transport)() does.

> > > +struct spdm_error_rsp {
> > > + u8 version;
> > > + u8 code;
> > > + enum spdm_error_code error_code:8;
> > > + u8 error_data;
> > > +
> > > + u8 extended_error_data[];
> > > +} __packed;
> >
> > Is this always going to produce the layout you want given the alignment
> > requirements for the storage unit for u8 and enum are probably different?
>
> Yes, the __packed attribute forces the compiler to avoid padding.

Okay, so I assume the compiler is actually able to pack the enum together
with the u8s; seemingly bitfield code generation has gotten better than
it used to be.

Given how little the wording in the spec promises (unless there is a later
update I've not seen), I'd suggest you still add a static_assert on the
sizeof of the struct to make sure it is always the correct size.
A layout mistake is much easier to catch at build time.
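
E.g. something like this right below the struct declaration (the expected
size of 4 is assumed from the fields shown above):

static_assert(sizeof(struct spdm_error_rsp) == 4);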

> > > +static int spdm_negotiate_algs(struct spdm_state *spdm_state,
> > > + void *transcript, size_t transcript_sz)
> > > +{
> > > + struct spdm_req_alg_struct *req_alg_struct;
> > > + struct spdm_negotiate_algs_req *req;
> > > + struct spdm_negotiate_algs_rsp *rsp;
> > > + size_t req_sz = sizeof(*req);
> > > + size_t rsp_sz = sizeof(*rsp);
> > > + int rc, length;
> > > +
> > > + /* Request length shall be <= 128 bytes (SPDM 1.1.0 margin no 185) */
> > > + BUILD_BUG_ON(req_sz > 128);
> >
> > I don't know why this really has to be here? This could be static_assert()
> > below the struct declaration.
>
> A follow-on patch to add key exchange support increases req_sz based on
> an SPDM_MAX_REQ_ALG_STRUCT macro defined here in front of the function
> where it's used. That's the reason why the size is checked here as well.

Okay, understood. I didn't go that deep in my analysis, so I missed the
later addition.

> > > +static int spdm_get_certificate(struct spdm_state *spdm_state, u8 slot)
> > > +{
> > > + struct spdm_get_certificate_req req = {
> > > + .code = SPDM_GET_CERTIFICATE,
> > > + .param1 = slot,
> > > + };
> > > + struct spdm_get_certificate_rsp *rsp;
> > > + struct spdm_cert_chain *certs = NULL;
> > > + size_t rsp_sz, total_length, header_length;
> > > + u16 remainder_length = 0xffff;
> >
> > 0xffff in this function should use either U16_MAX or SZ_64K - 1.
>
> The SPDM spec uses 0xffff so I'm deliberately using that as well
> to make the connection to the spec obvious.

It's not obvious to somebody who is reading 0xffff. If you want to make the
connection obvious, create a proper #define and add a comment with the spec
reference where it's defined.
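
Something along these lines (the name is made up, just to illustrate):

/*
 * Initial RemainderLength before the first GET_CERTIFICATE response has
 * reported the actual remaining length (value per the SPDM spec).
 */
#define SPDM_CERT_REMAINDER_UNKNOWN	0xffff

	u16 remainder_length = SPDM_CERT_REMAINDER_UNKNOWN;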

> > > +static void spdm_create_combined_prefix(struct spdm_state *spdm_state,
> > > + const char *spdm_context, void *buf)
> > > +{
> > > + u8 minor = spdm_state->version & 0xf;
> > > + u8 major = spdm_state->version >> 4;
> > > + size_t len = strlen(spdm_context);
> > > + int rc, zero_pad;
> > > +
> > > + rc = snprintf(buf, SPDM_PREFIX_SZ + 1,
> > > + "dmtf-spdm-v%hhx.%hhx.*dmtf-spdm-v%hhx.%hhx.*"
> > > + "dmtf-spdm-v%hhx.%hhx.*dmtf-spdm-v%hhx.%hhx.*",
> > > + major, minor, major, minor, major, minor, major, minor);
> >
> > Why are these using s8 formatting specifier %hhx ??
>
> I don't quite follow, "%hhx" is an unsigned char, not a signed char.
>
> spdm_state->version may contain e.g. 0x12 which is converted to
> "dmtf-spdm-v1.2.*" here.
>
> The question is what happens if the major or minor version goes beyond 9.
> The total length of the prefix is hard-coded by the spec, hence my
> expectation is that 1.10 will be represented as "dmtf-spdm-v1.a.*"
> to not exceed the length. The code follows that expectation.

It's actually fine.

I just got tunnel vision when looking up what that %hhx is in the first
place; in Documentation/core-api/printk-formats.rst there's this list:

signed char %d or %hhx
unsigned char %u or %x

But of course %hhx is just as valid for unsigned.
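
For what it's worth, the mapping is easy to verify standalone (userspace
sketch with assumed example values):

	#include <stdio.h>

	int main(void)
	{
		unsigned char version = 0x12;	/* assumed example: SPDM 1.2 */
		unsigned char major = version >> 4;
		unsigned char minor = version & 0xf;
		char buf[17];

		/* One of the four repetitions of the combined prefix */
		snprintf(buf, sizeof(buf), "dmtf-spdm-v%hhx.%hhx.*", major, minor);
		printf("%s\n", buf);	/* "dmtf-spdm-v1.2.*"; 0x1a gives "dmtf-spdm-v1.a.*" */
		return 0;
	}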

--
i.

2024-03-04 06:57:15

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 03/12] X.509: Move certificate length retrieval into new helper

On Fri, Oct 06, 2023 at 12:15:13PM -0700, Dan Williams wrote:
> Lukas Wunner wrote:
> > The upcoming in-kernel SPDM library (Security Protocol and Data Model,
> > https://www.dmtf.org/dsp/DSP0274) needs to retrieve the length from
> > ASN.1 DER-encoded X.509 certificates.
> >
> > Such code already exists in x509_load_certificate_list(), so move it
> > into a new helper for reuse by SPDM.
[...]
> > +EXPORT_SYMBOL_GPL(x509_get_certificate_length);
>
> Given CONFIG_PCI is a bool, is the export needed? Maybe save this export
> until the modular consumer arrives, or identify the modular consumer in the
> changelog?

The x509_get_certificate_length() helper introduced by this patch
isn't needed directly by the PCI core, but by the SPDM library.
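
(For background, all the helper has to do is parse the outer ASN.1
SEQUENCE header of the DER blob; a hypothetical sketch, not the actual
implementation:)

	/*
	 * Sketch only: return the total size of the DER-encoded certificate
	 * starting at p, or a negative errno.  Certificates in practice use
	 * the two-byte long-form length (tag 0x30, length prefix 0x82),
	 * which is all this sketch handles.
	 */
	static ssize_t cert_length_sketch(const u8 *p, size_t avail)
	{
		if (avail < 4 || p[0] != 0x30 || p[1] != 0x82)
			return -EINVAL;

		return ((size_t)p[2] << 8 | p[3]) + 4;	/* header + contents */
	}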

The SPDM library is tristate and is selected by CONFIG_PCI_CMA,
which is indeed bool.

However, SCSI and ATA (both tristate) have explicitly expressed an
interest in using the SPDM library.

If I drop the export, I'd have to declare the SPDM library bool.

I'm leaning towards keeping the SPDM library tristate (and keep the
export) to accommodate SCSI, ATA and possibly others.

Please let me know if you disagree.

Thanks,

Lukas

2024-03-04 19:51:54

by Dan Williams

[permalink] [raw]
Subject: Re: [PATCH 03/12] X.509: Move certificate length retrieval into new helper

Lukas Wunner wrote:
> On Fri, Oct 06, 2023 at 12:15:13PM -0700, Dan Williams wrote:
> > Lukas Wunner wrote:
> > > The upcoming in-kernel SPDM library (Security Protocol and Data Model,
> > > https://www.dmtf.org/dsp/DSP0274) needs to retrieve the length from
> > > ASN.1 DER-encoded X.509 certificates.
> > >
> > > Such code already exists in x509_load_certificate_list(), so move it
> > > into a new helper for reuse by SPDM.
> [...]
> > > +EXPORT_SYMBOL_GPL(x509_get_certificate_length);
> >
> > Given CONFIG_PCI is a bool, is the export needed? Maybe save this export
> > until the modular consumer arrives, or identify the modular consumer in the
> > changelog?
>
> The x509_get_certificate_length() helper introduced by this patch
> isn't needed directly by the PCI core, but by the SPDM library.
>
> The SPDM library is tristate and is selected by CONFIG_PCI_CMA,
> which is indeed bool.
>
> However, SCSI and ATA (both tristate) have explicitly expressed an
> interest in using the SPDM library.
>
> If I drop the export, I'd have to declare the SPDM library bool.
>
> I'm leaning towards keeping the SPDM library tristate (and keep the
> export) to accommodate SCSI, ATA and possibly others.
>
> Please let me know if you disagree.

Oh, missed that the SPDM library is the first modular consumer. Looks
good to me.

2024-03-20 08:33:41

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 07/12] spdm: Introduce library to authenticate devices

On Fri, Feb 09, 2024 at 09:32:04PM +0100, Lukas Wunner wrote:
> On Tue, Oct 03, 2023 at 01:35:26PM +0300, Ilpo Järvinen wrote:
> > On Thu, 28 Sep 2023, Lukas Wunner wrote:
> > > + spdm_state->responder_caps = le32_to_cpu(rsp->flags);
> >
> > Earlier, unaligned accessors were used with the version_number_entries.
> > Is it intentional they're not used here (I cannot see what would be
> > reason for this difference)?
>
> Thanks, good catch. Indeed this is not necessarily naturally aligned
> because the GET_CAPABILITIES request and response follow the
> GET_VERSION response in the same allocation. And the GET_VERSION
> response size is a multiple of 2, but not always a multiple of 4.

Actually, scratch that.

I've realized that since all the SPDM request/response structs are
declared __packed, the alignment requirement for the struct members
becomes 1 byte and hence they're automatically accessed byte-wise on
arches which require that:

https://stackoverflow.com/questions/73152859/accessing-unaligned-struct-member-using-pointers#73154825

E.g. this line...

req->data_transfer_size = cpu_to_le32(spdm_state->transport_sz);

...becomes this on arm 32-bit (multi_v4t_defconfig)...

ldr r3, [r5, #0x1c] ; load spdm_state->transport_sz into r3
lsr r2, r3, #8 ; right-shift r3 into r2 by 8 bits
strb r3, [r7, #0xc] ; copy lowest byte from r3 into request
strb r2, [r7, #0xd] ; copy next byte from r2 into request
lsr r2, r3, #16 ; right-shift r3 into r2 by 16 bits
lsr r3, r3, #24 ; right-shift r3 into r3 by 24 bits
strb r2, [r7, #0xe] ; copy next byte from r2 into request
strb r3, [r7, #0xf] ; copy next byte from r3 into request

...and it becomes this on x86_64, which has no alignment requirements:

mov eax, dword [r15+0x40] ; load spdm_state->transport_sz
mov dword [r12+0xc], eax ; copy into request

So for __packed structs, get_unaligned_*() / put_unaligned_*() accessors
are not necessary and I will drop them when respinning.
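
To illustrate with a reduced example (the field layout below is
simplified and not the real request struct):

	struct example_req {
		u8     code;
		u8     param1;
		u8     param2;
		__le32 data_transfer_size;	/* offset 3, i.e. misaligned */
	} __packed;

	static void example_fill(struct example_req *req, u32 transport_sz)
	{
		/*
		 * Plain assignment: the member's alignment is 1, so the
		 * compiler emits byte-wise stores where the arch needs them.
		 */
		req->data_transfer_size = cpu_to_le32(transport_sz);

		/* Equivalent, but redundant for a __packed member: */
		/* put_unaligned_le32(transport_sz, &req->data_transfer_size); */
	}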

Thanks,

Lukas