2022-11-03 18:20:31

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 00/11] Encrypted Hibernation

We are exploring enabling hibernation in some new scenarios. However,
our security team has a few requirements, listed below:
1. The hibernate image must be encrypted with protection derived from
both the platform (eg TPM) and user authentication data (eg
password).
2. Hibernation must not be a vector by which a malicious userspace can
escalate to the kernel.

Requirement #1 can be achieved solely with uswsusp, however requirement
2 necessitates mechanisms in the kernel to guarantee integrity of the
hibernate image. The kernel needs a way to authenticate that it generated
the hibernate image being loaded, and that the image has not been tampered
with. Adding support for in-kernel AEAD encryption with a TPM-sealed key
allows us to achieve both requirements with a single computation pass.

Matthew Garrett published a series [1] that aligns closely with this
goal. His series utilized the fact that PCR23 is a resettable PCR that
can be blocked from access by usermode. The TPM can create a sealed key
tied to PCR23 in two ways. First, the TPM can attest to the value of
PCR23 when the key was created, which the kernel can use on resume to
verify that the kernel must have created the key (since it is the only
one capable of modifying PCR23). It can also create a policy that enforces
PCR23 be set to a specific value as a condition of unsealing the key,
preventing usermode from unsealing the key by talking directly to the
TPM.

This series adopts that primitive as a foundation, tweaking and building
on it a bit. Where Matthew's series used the TPM-backed key to encrypt a
hash of the image, this series uses the key directly as a gcm(aes)
encryption key, which the kernel uses to encrypt and decrypt the
hibernate image in chunks of 16 pages. This provides both encryption and
integrity, which turns out to be a noticeable performance improvement over
separate passes for encryption and hashing.

The series also introduces the concept of mixing user key material into
the encryption key. This allows usermode to introduce key material
based on unspecified external authentication data (in our case derived
from something like the user password or PIN), without requiring
usermode to do a separate encryption pass.

Matthew also documented issues his series had [2] related to generating
fake images by booting alternate kernels without the PCR23 limiting.
With access to PCR23 on the same machine, usermode can create fake
hibernate images that are indistinguishable to the new kernel from
genuine ones. His post outlines a solution that involves adding more
PCRs into the creation data and policy, with some gyrations to make this
work well on a standard PC.

Our approach would be similar: on our machines PCR 0 indicates whether
the system is booted in secure/verified mode or developer mode. By
adding PCR0 to the policy, we can reject hibernate images made in
developer mode while in verified mode (or vice versa).

Additionally, mixing in the user authentication data limits both
data exfiltration attacks (eg a stolen laptop) and forged hibernation
image attacks to attackers that already know the authentication data (eg
user's password). This, combined with our relatively sealed userspace
(dm-verity on the rootfs), and some judicious clearing of the hibernate
image (such as across an OS update) further reduce the risk of an online
attack. The remaining attack space of a forgery from someone with
physical access to the device and knowledge of the authentication data
is out of scope for us, given that flipping to developer mode or
reflashing RO firmware trivially achieves the same thing.

A couple of patches still need to be written on top of this series. The
generalized functionality to OR in additional PCRs via Kconfig (like PCR
0 or 5) still needs to be added. We'll also need a patch that disallows
unencrypted forms of resume from hibernation, to fully close the door
to malicious userspace. However, I wanted to get this series out first
and get reactions from upstream before continuing to add to it.

[1] https://patchwork.kernel.org/project/linux-pm/cover/[email protected]/
[2] https://mjg59.dreamwidth.org/58077.html

Changes in v4:
- Open code tpm2_pcr_reset implementation in tpm-interface.c (Jarkko)
- Rename interface symbol to tpm2_pcr_reset, fix kerneldocs (Jarkko)
- Augment the commit message (Jarkko)
- Local ordering and whitespace changes (Jarkko)
- s/tpm_pcr_reset/tpm2_pcr_reset/ due to change in other patch
- Variable ordering and whitespace fixes (Jarkko)
- Add NULL check explanation in teardown (Jarkko)
- Change strlen+1 to sizeof for static buffer (Jarkko)
- Fix nr_allocated_banks loop overflow (found via KASAN)
- Local variable reordering (Jarkko)
- Local variable ordering (Jarkko)

Changes in v3:
- Unify tpm1/2_pcr_reset prototypes (Jarkko)
- Wait no, remove the TPM1 stuff altogether (Jarkko)
- Remove extra From tag and blank in commit msg (Jarkko).
- Split find_and_validate_cc() export to its own patch (Jarkko)
- Rename tpm_find_and_validate_cc() to tpm2_find_and_validate_cc().
- Fix up commit message (Jarkko)
- tpm2_find_and_validate_cc() was split (Jarkko)
- Simply fully restrict TPM1 since v2 failed to account for tunnelled
transport sessions (Stefan and Jarkko).
- Fix SoB and -- note ordering (Kees)
- Add comments describing the TPM2 spec type names for the new fields
in tpm2key.asn1 (Kees)
- Add len buffer checks in tpm2_key_encode() (Kees)
- Clarified creationpcrs documentation (Ben)
- Changed funky tag to suggested-by (Kees). Matthew, holler if you want
something different.
- ENCRYPTED_HIBERNATION needs TRUSTED_KEYS builtin for
key_type_trusted.
- Remove KEYS dependency since it's covered by TRUSTED_KEYS (Kees)
- Changed funky tag to Co-developed-by (Kees). Matthew, holler if you
want something different.
- Changed funky tag to Co-developed-by (Kees)

Changes in v2:
- Fixed sparse warnings
- Adjust hash len by 2 due to new ASN.1 storage, and add underflow
check.
- Rework load/create_kernel_key() to eliminate a label (Andrey)
- Call put_device() needed from calling tpm_default_chip().
- Add missing static on snapshot_encrypted_byte_count()
- Fold in only the used kernel key bytes to the user key.
- Make the user key length 32 (Eric)
- Use CRYPTO_LIB_SHA256 for less boilerplate (Eric)
- Fixed some sparse warnings
- Use CRYPTO_LIB_SHA256 to get rid of sha256_data() (Eric)
- Adjusted offsets due to new ASN.1 format, and added a creation data
length check.
- Fix sparse warnings
- Fix session type comment (Andrey)
- Eliminate extra label in get/create_kernel_key() (Andrey)
- Call tpm_try_get_ops() before calling tpm2_flush_context().

Evan Green (8):
tpm: Export and rename tpm2_find_and_validate_cc()
security: keys: trusted: Include TPM2 creation data
security: keys: trusted: Verify creation data
PM: hibernate: Add kernel-based encryption
PM: hibernate: Use TPM-backed keys to encrypt image
PM: hibernate: Mix user key in encrypted hibernate
PM: hibernate: Verify the digest encryption key
PM: hibernate: seal the encryption key with a PCR policy

Matthew Garrett (3):
tpm: Add support for in-kernel resetting of PCRs
tpm: Allow PCR 23 to be restricted to kernel-only use
security: keys: trusted: Allow storage of PCR values in creation data

Documentation/power/userland-swsusp.rst | 8 +
.../security/keys/trusted-encrypted.rst | 6 +
drivers/char/tpm/Kconfig | 12 +
drivers/char/tpm/tpm-dev-common.c | 8 +
drivers/char/tpm/tpm-interface.c | 47 +
drivers/char/tpm/tpm.h | 22 +
drivers/char/tpm/tpm1-cmd.c | 13 +
drivers/char/tpm/tpm2-cmd.c | 29 +-
drivers/char/tpm/tpm2-space.c | 8 +-
include/keys/trusted-type.h | 9 +
include/linux/tpm.h | 19 +
include/uapi/linux/suspend_ioctls.h | 28 +-
kernel/power/Kconfig | 15 +
kernel/power/Makefile | 1 +
kernel/power/power.h | 1 +
kernel/power/snapenc.c | 1043 +++++++++++++++++
kernel/power/snapshot.c | 5 +
kernel/power/user.c | 44 +-
kernel/power/user.h | 116 ++
security/keys/trusted-keys/tpm2key.asn1 | 15 +-
security/keys/trusted-keys/trusted_tpm1.c | 9 +
security/keys/trusted-keys/trusted_tpm2.c | 318 ++++-
22 files changed, 1724 insertions(+), 52 deletions(-)
create mode 100644 kernel/power/snapenc.c
create mode 100644 kernel/power/user.h

--
2.38.1.431.g37b22c650d-goog



2022-11-03 18:21:20

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 10/11] PM: hibernate: Verify the digest encryption key

We want to ensure that the key used to encrypt the digest was created by
the kernel during hibernation. To do this we request that the TPM
include information about the value of PCR 23 at the time of key
creation in the sealed blob. On resume, we can make sure that the PCR
information in the creation data blob (already certified by the TPM to
be accurate) corresponds to the expected value. Since only
the kernel can touch PCR 23, if an attacker generates a key themselves
the value of PCR 23 will have been different, allowing us to reject the
key and boot normally instead of resuming.

Co-developed-by: Matthew Garrett <[email protected]>
Signed-off-by: Matthew Garrett <[email protected]>
Signed-off-by: Evan Green <[email protected]>

---
Matthew's original version of this patch is here:
https://patchwork.kernel.org/project/linux-pm/patch/[email protected]/

I moved the TPM2_CC_CERTIFYCREATION code into a separate change in the
trusted key code because the blob_handle was being flushed and was no
longer valid for use in CC_CERTIFYCREATION after the key was loaded. As
an added benefit of moving the certification into the trusted keys code,
we can drop the other patch from the original series that squirrelled
the blob_handle away.

Changes in v4:
- Local variable reordering (Jarkko)

Changes in v3:
- Changed funky tag to Co-developed-by (Kees). Matthew, holler if you
want something different.

Changes in v2:
- Fixed some sparse warnings
- Use CRYPTO_LIB_SHA256 to get rid of sha256_data() (Eric)
- Adjusted offsets due to new ASN.1 format, and added a creation data
length check.

kernel/power/snapenc.c | 67 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 65 insertions(+), 2 deletions(-)

diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
index 50167a37c5bf23..2f421061498246 100644
--- a/kernel/power/snapenc.c
+++ b/kernel/power/snapenc.c
@@ -22,6 +22,12 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256,
0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05,
0x5f, 0x49}};

+/* sha256(sha256(empty_pcr | known_digest)) */
+static const char expected_digest[] = {0x2f, 0x96, 0xf2, 0x1b, 0x70, 0xa9, 0xe8,
+ 0x42, 0x25, 0x8e, 0x66, 0x07, 0xbe, 0xbc, 0xe3, 0x1f, 0x2c, 0x84, 0x4a,
+ 0x3f, 0x85, 0x17, 0x31, 0x47, 0x9a, 0xa5, 0x53, 0xbb, 0x23, 0x0c, 0x32,
+ 0xf3};
+
/* Derive a key from the kernel and user keys for data encryption. */
static int snapshot_use_user_key(struct snapshot_data *data)
{
@@ -486,7 +492,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data)
static int snapshot_create_kernel_key(struct snapshot_data *data)
{
/* Create a key sealed by the SRK. */
- char *keyinfo = "new\t32\tkeyhandle=0x81000000";
+ char *keyinfo = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000";
const struct cred *cred = current_cred();
struct tpm_digest *digests = NULL;
struct key *key = NULL;
@@ -613,6 +619,8 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,

char *keytemplate = "load\t%s\tkeyhandle=0x81000000";
const struct cred *cred = current_cred();
+ struct trusted_key_payload *payload;
+ char certhash[SHA256_DIGEST_SIZE];
struct tpm_digest *digests = NULL;
char *blobstring = NULL;
struct key *key = NULL;
@@ -635,8 +643,10 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,

digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest),
GFP_KERNEL);
- if (!digests)
+ if (!digests) {
+ ret = -ENOMEM;
goto out;
+ }

for (i = 0; i < chip->nr_allocated_banks; i++) {
digests[i].alg_id = chip->allocated_banks[i].alg_id;
@@ -676,6 +686,59 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
if (ret != 0)
goto out;

+ /* Verify the creation hash matches the creation data. */
+ payload = key->payload.data[0];
+ if (!payload->creation || !payload->creation_hash ||
+ (payload->creation_len < 3) ||
+ (payload->creation_hash_len < SHA256_DIGEST_SIZE)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ sha256(payload->creation + 2, payload->creation_len - 2, certhash);
+ if (memcmp(payload->creation_hash + 2, certhash, SHA256_DIGEST_SIZE) != 0) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* We now know that the creation data is authentic - parse it */
+
+ /* TPML_PCR_SELECTION.count */
+ if (be32_to_cpu(*(__be32 *)&payload->creation[2]) != 1) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (be16_to_cpu(*(__be16 *)&payload->creation[6]) != TPM_ALG_SHA256) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (*(char *)&payload->creation[8] != 3) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* PCR 23 selected */
+ if (be32_to_cpu(*(__be32 *)&payload->creation[8]) != 0x03000080) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (be16_to_cpu(*(__be16 *)&payload->creation[12]) !=
+ SHA256_DIGEST_SIZE) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* Verify PCR 23 contained the expected value when the key was created. */
+ if (memcmp(&payload->creation[14], expected_digest,
+ SHA256_DIGEST_SIZE) != 0) {
+
+ ret = -EINVAL;
+ goto out;
+ }
+
data->key = key;
key = NULL;

--
2.38.1.431.g37b22c650d-goog


2022-11-03 18:22:07

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 01/11] tpm: Add support for in-kernel resetting of PCRs

From: Matthew Garrett <[email protected]>

Add an internal command for resetting a PCR. This will be used by the
encrypted hibernation code to set PCR23 to a known value. The
hibernation code will seal the hibernation key with a policy specifying
PCR23 be set to this known value as a mechanism to ensure that the
hibernation key is genuine. But to do this repeatedly, resetting the PCR
is necessary as well.

Link: https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Matthew Garrett <[email protected]>
Signed-off-by: Evan Green <[email protected]>
---

Changes in v4:
- Open code tpm2_pcr_reset implementation in tpm-interface.c (Jarkko)
- Rename interface symbol to tpm2_pcr_reset, fix kerneldocs (Jarkko)

Changes in v3:
- Unify tpm1/2_pcr_reset prototypes (Jarkko)
- Wait no, remove the TPM1 stuff altogether (Jarkko)
- Remove extra From tag and blank in commit msg (Jarkko).

drivers/char/tpm/tpm-interface.c | 47 ++++++++++++++++++++++++++++++++
drivers/char/tpm/tpm2-cmd.c | 7 -----
include/linux/tpm.h | 14 ++++++++++
3 files changed, 61 insertions(+), 7 deletions(-)

diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 1621ce8187052c..886277b2654e3b 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -342,6 +342,53 @@ int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
}
EXPORT_SYMBOL_GPL(tpm_pcr_extend);

+/**
+ * tpm2_pcr_reset - Reset the specified PCR
+ * @chip: A &struct tpm_chip instance, %NULL for the default chip
+ * @pcr_idx: The PCR to be reset
+ *
+ * Return: Same as with tpm_transmit_cmd(), or ENOTTY for TPM1 devices.
+ */
+int tpm2_pcr_reset(struct tpm_chip *chip, u32 pcr_idx)
+{
+ struct tpm2_null_auth_area auth_area;
+ struct tpm_buf buf;
+ int rc;
+
+ chip = tpm_find_get_ops(chip);
+ if (!chip)
+ return -ENODEV;
+
+ if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
+ rc = -ENOTTY;
+ goto out;
+ }
+
+ rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_PCR_RESET);
+ if (rc)
+ goto out;
+
+ tpm_buf_append_u32(&buf, pcr_idx);
+
+ auth_area.handle = cpu_to_be32(TPM2_RS_PW);
+ auth_area.nonce_size = 0;
+ auth_area.attributes = 0;
+ auth_area.auth_size = 0;
+
+ tpm_buf_append_u32(&buf, sizeof(struct tpm2_null_auth_area));
+ tpm_buf_append(&buf, (const unsigned char *)&auth_area,
+ sizeof(auth_area));
+
+ rc = tpm_transmit_cmd(chip, &buf, 0, "attempting to reset a PCR");
+
+ tpm_buf_destroy(&buf);
+
+out:
+ tpm_put_ops(chip);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(tpm2_pcr_reset);
+
/**
* tpm_send - send a TPM command
* @chip: a &struct tpm_chip instance, %NULL for the default chip
diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
index 65d03867e114c5..303ce2ea02a4b0 100644
--- a/drivers/char/tpm/tpm2-cmd.c
+++ b/drivers/char/tpm/tpm2-cmd.c
@@ -216,13 +216,6 @@ int tpm2_pcr_read(struct tpm_chip *chip, u32 pcr_idx,
return rc;
}

-struct tpm2_null_auth_area {
- __be32 handle;
- __be16 nonce_size;
- u8 attributes;
- __be16 auth_size;
-} __packed;
-
/**
* tpm2_pcr_extend() - extend a PCR value
*
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index dfeb25a0362dee..70134e6551745f 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -219,6 +219,7 @@ enum tpm2_command_codes {
TPM2_CC_HIERARCHY_CONTROL = 0x0121,
TPM2_CC_HIERARCHY_CHANGE_AUTH = 0x0129,
TPM2_CC_CREATE_PRIMARY = 0x0131,
+ TPM2_CC_PCR_RESET = 0x013D,
TPM2_CC_SEQUENCE_COMPLETE = 0x013E,
TPM2_CC_SELF_TEST = 0x0143,
TPM2_CC_STARTUP = 0x0144,
@@ -293,6 +294,13 @@ struct tpm_header {
};
} __packed;

+struct tpm2_null_auth_area {
+ __be32 handle;
+ __be16 nonce_size;
+ u8 attributes;
+ __be16 auth_size;
+} __packed;
+
/* A string buffer type for constructing TPM commands. This is based on the
* ideas of string buffer code in security/keys/trusted.h but is heap based
* in order to keep the stack usage minimal.
@@ -423,6 +431,7 @@ extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf,
size_t min_rsp_body_length, const char *desc);
extern int tpm_pcr_read(struct tpm_chip *chip, u32 pcr_idx,
struct tpm_digest *digest);
+extern int tpm2_pcr_reset(struct tpm_chip *chip, u32 pcr_idx);
extern int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
struct tpm_digest *digests);
extern int tpm_send(struct tpm_chip *chip, void *cmd, size_t buflen);
@@ -440,6 +449,11 @@ static inline int tpm_pcr_read(struct tpm_chip *chip, int pcr_idx,
return -ENODEV;
}

+static inline int tpm2_pcr_reset(struct tpm_chip *chip, int pcr_idx)
+{
+ return -ENODEV;
+}
+
static inline int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
struct tpm_digest *digests)
{
--
2.38.1.431.g37b22c650d-goog


2022-11-03 18:25:26

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 08/11] PM: hibernate: Use TPM-backed keys to encrypt image

When using encrypted hibernate images, have the TPM create a key for us
and seal it. By handing back a sealed blob instead of the raw key, we
prevent usermode from being able to decrypt and tamper with the
hibernate image on a different machine.

We'll also go through the motions of having PCR23 set to a known value at
the time of key creation and unsealing. Currently there's nothing that
enforces the contents of PCR23 as a condition to unseal the key blob,
that will come in a later change.

Sourced-from: Matthew Garrett <[email protected]>
Signed-off-by: Evan Green <[email protected]>

---
Matthew's incarnation of this patch is at:
https://patchwork.kernel.org/project/linux-pm/patch/[email protected]/

Changes in v4:
- s/tpm_pcr_reset/tpm2_pcr_reset/ due to change in other patch
- Variable ordering and whitespace fixes (Jarkko)
- Add NULL check explanation in teardown (Jarkko)
- Change strlen+1 to sizeof for static buffer (Jarkko)
- Fix nr_allocated_banks loop overflow (found via KASAN)

Changes in v3:
- ENCRYPTED_HIBERNATION needs TRUSTED_KEYS builtin for
key_type_trusted.
- Remove KEYS dependency since it's covered by TRUSTED_KEYS (Kees)

Changes in v2:
- Rework load/create_kernel_key() to eliminate a label (Andrey)
- Call put_device() needed from calling tpm_default_chip().

kernel/power/Kconfig | 1 +
kernel/power/snapenc.c | 211 +++++++++++++++++++++++++++++++++++++++--
kernel/power/user.h | 1 +
3 files changed, 204 insertions(+), 9 deletions(-)

diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
index cd574af0b43379..2f8acbd87b34dc 100644
--- a/kernel/power/Kconfig
+++ b/kernel/power/Kconfig
@@ -96,6 +96,7 @@ config ENCRYPTED_HIBERNATION
bool "Encryption support for userspace snapshots"
depends on HIBERNATION_SNAPSHOT_DEV
depends on CRYPTO_AEAD2=y
+ depends on TRUSTED_KEYS=y
default n
help
Enable support for kernel-based encryption of hibernation snapshots
diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
index f215df16dad4d3..7ff4fc66f7500c 100644
--- a/kernel/power/snapenc.c
+++ b/kernel/power/snapenc.c
@@ -4,13 +4,23 @@
#include <linux/crypto.h>
#include <crypto/aead.h>
#include <crypto/gcm.h>
+#include <keys/trusted-type.h>
+#include <linux/key-type.h>
#include <linux/random.h>
#include <linux/mm.h>
+#include <linux/tpm.h>
#include <linux/uaccess.h>

#include "power.h"
#include "user.h"

+/* sha256("To sleep, perchance to dream") */
+static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256,
+ .digest = {0x92, 0x78, 0x3d, 0x79, 0x2d, 0x00, 0x31, 0xb0, 0x55, 0xf9,
+ 0x1e, 0x0d, 0xce, 0x83, 0xde, 0x1d, 0xc4, 0xc5, 0x8e, 0x8c,
+ 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05,
+ 0x5f, 0x49}};
+
/* Encrypt more data from the snapshot into the staging area. */
static int snapshot_encrypt_refill(struct snapshot_data *data)
{
@@ -314,6 +324,16 @@ void snapshot_teardown_encryption(struct snapshot_data *data)
{
int i;

+ /*
+ * Do NULL checks so this function can safely be called from error paths
+ * and other places where this context may not be fully set up.
+ */
+ if (data->key) {
+ key_revoke(data->key);
+ key_put(data->key);
+ data->key = NULL;
+ }
+
if (data->aead_req) {
aead_request_free(data->aead_req);
data->aead_req = NULL;
@@ -382,10 +402,82 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data)
return rc;
}

+static int snapshot_create_kernel_key(struct snapshot_data *data)
+{
+ /* Create a key sealed by the SRK. */
+ char *keyinfo = "new\t32\tkeyhandle=0x81000000";
+ const struct cred *cred = current_cred();
+ struct tpm_digest *digests = NULL;
+ struct key *key = NULL;
+ struct tpm_chip *chip;
+ int ret, i;
+
+ chip = tpm_default_chip();
+ if (!chip)
+ return -ENODEV;
+
+ if (!(tpm_is_tpm2(chip))) {
+ ret = -ENODEV;
+ goto out_dev;
+ }
+
+ ret = tpm2_pcr_reset(chip, 23);
+ if (ret)
+ goto out;
+
+ digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest),
+ GFP_KERNEL);
+ if (!digests) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ for (i = 0; i < chip->nr_allocated_banks; i++) {
+ digests[i].alg_id = chip->allocated_banks[i].alg_id;
+ if (digests[i].alg_id == known_digest.alg_id)
+ memcpy(&digests[i], &known_digest, sizeof(known_digest));
+ }
+
+ ret = tpm_pcr_extend(chip, 23, digests);
+ if (ret != 0)
+ goto out;
+
+ key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID,
+ GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA,
+ NULL);
+
+ if (IS_ERR(key)) {
+ ret = PTR_ERR(key);
+ key = NULL;
+ goto out;
+ }
+
+ ret = key_instantiate_and_link(key, keyinfo, sizeof(keyinfo), NULL,
+ NULL);
+ if (ret != 0)
+ goto out;
+
+ data->key = key;
+ key = NULL;
+
+out:
+ if (key) {
+ key_revoke(key);
+ key_put(key);
+ }
+
+ kfree(digests);
+ tpm2_pcr_reset(chip, 23);
+
+out_dev:
+ put_device(&chip->dev);
+ return ret;
+}
+
int snapshot_get_encryption_key(struct snapshot_data *data,
struct uswsusp_key_blob __user *key)
{
- u8 aead_key[SNAPSHOT_ENCRYPTION_KEY_SIZE];
+ struct trusted_key_payload *payload;
u8 nonce[USWSUSP_KEY_NONCE_SIZE];
int rc;

@@ -401,21 +493,28 @@ int snapshot_get_encryption_key(struct snapshot_data *data,
get_random_bytes(nonce, sizeof(nonce));
memcpy(&data->nonce_low, &nonce[0], sizeof(data->nonce_low));
memcpy(&data->nonce_high, &nonce[8], sizeof(data->nonce_high));
- /* Build a random key */
- get_random_bytes(aead_key, sizeof(aead_key));
- rc = crypto_aead_setkey(data->aead_tfm, aead_key, sizeof(aead_key));
+
+ /* Create a kernel key, and set it. */
+ rc = snapshot_create_kernel_key(data);
+ if (rc)
+ goto fail;
+
+ payload = data->key->payload.data[0];
+ /* Install the key */
+ rc = crypto_aead_setkey(data->aead_tfm, payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE);
if (rc)
goto fail;

- /* Hand the key back to user mode (to be changed!) */
- rc = put_user(sizeof(struct uswsusp_key_blob), &key->blob_len);
+ /* Hand the key back to user mode in sealed form. */
+ rc = put_user(payload->blob_len, &key->blob_len);
if (rc)
goto fail;

- rc = copy_to_user(&key->blob, &aead_key, sizeof(aead_key));
+ rc = copy_to_user(&key->blob, &payload->blob, payload->blob_len);
if (rc)
goto fail;

+ /* The nonce just gets handed back in the clear. */
rc = copy_to_user(&key->nonce, &nonce, sizeof(nonce));
if (rc)
goto fail;
@@ -427,10 +526,99 @@ int snapshot_get_encryption_key(struct snapshot_data *data,
return rc;
}

+static int snapshot_load_kernel_key(struct snapshot_data *data,
+ struct uswsusp_key_blob *blob)
+{
+
+ char *keytemplate = "load\t%s\tkeyhandle=0x81000000";
+ const struct cred *cred = current_cred();
+ struct tpm_digest *digests = NULL;
+ char *blobstring = NULL;
+ struct key *key = NULL;
+ struct tpm_chip *chip;
+ char *keyinfo = NULL;
+ int i, ret;
+
+ chip = tpm_default_chip();
+ if (!chip)
+ return -ENODEV;
+
+ if (!(tpm_is_tpm2(chip))) {
+ ret = -ENODEV;
+ goto out_dev;
+ }
+
+ ret = tpm2_pcr_reset(chip, 23);
+ if (ret)
+ goto out;
+
+ digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest),
+ GFP_KERNEL);
+ if (!digests)
+ goto out;
+
+ for (i = 0; i < chip->nr_allocated_banks; i++) {
+ digests[i].alg_id = chip->allocated_banks[i].alg_id;
+ if (digests[i].alg_id == known_digest.alg_id)
+ memcpy(&digests[i], &known_digest, sizeof(known_digest));
+ }
+
+ ret = tpm_pcr_extend(chip, 23, digests);
+ if (ret != 0)
+ goto out;
+
+ blobstring = kmalloc(blob->blob_len * 2, GFP_KERNEL);
+ if (!blobstring) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ bin2hex(blobstring, blob->blob, blob->blob_len);
+ keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring);
+ if (!keyinfo) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID,
+ GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA,
+ NULL);
+
+ if (IS_ERR(key)) {
+ ret = PTR_ERR(key);
+ key = NULL;
+ goto out;
+ }
+
+ ret = key_instantiate_and_link(key, keyinfo, strlen(keyinfo) + 1, NULL,
+ NULL);
+ if (ret != 0)
+ goto out;
+
+ data->key = key;
+ key = NULL;
+
+out:
+ if (key) {
+ key_revoke(key);
+ key_put(key);
+ }
+
+ kfree(keyinfo);
+ kfree(blobstring);
+ kfree(digests);
+ tpm2_pcr_reset(chip, 23);
+
+out_dev:
+ put_device(&chip->dev);
+ return ret;
+}
+
int snapshot_set_encryption_key(struct snapshot_data *data,
struct uswsusp_key_blob __user *key)
{
struct uswsusp_key_blob blob;
+ struct trusted_key_payload *payload;
int rc;

/* It's too late if data's been pushed in. */
@@ -446,13 +634,18 @@ int snapshot_set_encryption_key(struct snapshot_data *data,
if (rc)
goto crypto_setup_fail;

- if (blob.blob_len != sizeof(struct uswsusp_key_blob)) {
+ if (blob.blob_len > sizeof(key->blob)) {
rc = -EINVAL;
goto crypto_setup_fail;
}

+ rc = snapshot_load_kernel_key(data, &blob);
+ if (rc)
+ goto crypto_setup_fail;
+
+ payload = data->key->payload.data[0];
rc = crypto_aead_setkey(data->aead_tfm,
- blob.blob,
+ payload->key,
SNAPSHOT_ENCRYPTION_KEY_SIZE);

if (rc)
diff --git a/kernel/power/user.h b/kernel/power/user.h
index ac429782abff85..6c86fb64ebe13e 100644
--- a/kernel/power/user.h
+++ b/kernel/power/user.h
@@ -31,6 +31,7 @@ struct snapshot_data {
uint64_t crypt_total;
uint64_t nonce_low;
uint64_t nonce_high;
+ struct key *key;
#endif

};
--
2.38.1.431.g37b22c650d-goog


2022-11-03 18:39:58

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 06/11] security: keys: trusted: Verify creation data

If a loaded key contains creation data, ask the TPM to verify that
creation data. This allows users like encrypted hibernate to know that
the loaded and parsed creation data has not been tampered with.

Suggested-by: Matthew Garrett <[email protected]>
Signed-off-by: Evan Green <[email protected]>

---
Source material for this change is at:
https://patchwork.kernel.org/project/linux-pm/patch/[email protected]/

(no changes since v3)

Changes in v3:
- Changed funky tag to suggested-by (Kees). Matthew, holler if you want
something different.

Changes in v2:
- Adjust hash len by 2 due to new ASN.1 storage, and add underflow
check.

include/linux/tpm.h | 1 +
security/keys/trusted-keys/trusted_tpm2.c | 77 ++++++++++++++++++++++-
2 files changed, 77 insertions(+), 1 deletion(-)

diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index 70134e6551745f..9c2ee3e30ffa5d 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -224,6 +224,7 @@ enum tpm2_command_codes {
TPM2_CC_SELF_TEST = 0x0143,
TPM2_CC_STARTUP = 0x0144,
TPM2_CC_SHUTDOWN = 0x0145,
+ TPM2_CC_CERTIFYCREATION = 0x014A,
TPM2_CC_NV_READ = 0x014E,
TPM2_CC_CREATE = 0x0153,
TPM2_CC_LOAD = 0x0157,
diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
index a7ad83bc0e5396..c76a1b5a2e8471 100644
--- a/security/keys/trusted-keys/trusted_tpm2.c
+++ b/security/keys/trusted-keys/trusted_tpm2.c
@@ -703,6 +703,74 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
return rc;
}

+/**
+ * tpm2_certify_creation() - execute a TPM2_CertifyCreation command
+ *
+ * @chip: TPM chip to use
+ * @payload: the key data in clear and encrypted form
+ * @blob_handle: the loaded TPM handle of the key
+ *
+ * Return: 0 on success
+ * -EINVAL on tpm error status
+ * < 0 error from tpm_send or tpm_buf_init
+ */
+static int tpm2_certify_creation(struct tpm_chip *chip,
+ struct trusted_key_payload *payload,
+ u32 blob_handle)
+{
+ struct tpm_header *head;
+ struct tpm_buf buf;
+ int rc;
+
+ rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_CERTIFYCREATION);
+ if (rc)
+ return rc;
+
+ /* Use TPM_RH_NULL for signHandle */
+ tpm_buf_append_u32(&buf, 0x40000007);
+
+ /* Object handle */
+ tpm_buf_append_u32(&buf, blob_handle);
+
+ /* Auth */
+ tpm_buf_append_u32(&buf, 9);
+ tpm_buf_append_u32(&buf, TPM2_RS_PW);
+ tpm_buf_append_u16(&buf, 0);
+ tpm_buf_append_u8(&buf, 0);
+ tpm_buf_append_u16(&buf, 0);
+
+ /* Qualifying data */
+ tpm_buf_append_u16(&buf, 0);
+
+ /* Creation data hash */
+ if (payload->creation_hash_len < 2) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ tpm_buf_append_u16(&buf, payload->creation_hash_len - 2);
+ tpm_buf_append(&buf, payload->creation_hash + 2,
+ payload->creation_hash_len - 2);
+
+ /* signature scheme */
+ tpm_buf_append_u16(&buf, TPM_ALG_NULL);
+
+ /* creation ticket */
+ tpm_buf_append(&buf, payload->tk, payload->tk_len);
+
+ rc = tpm_transmit_cmd(chip, &buf, 6, "certifying creation data");
+ if (rc)
+ goto out;
+
+ head = (struct tpm_header *)buf.data;
+
+ if (be32_to_cpu(head->return_code) != TPM2_RC_SUCCESS)
+ rc = -EINVAL;
+out:
+ tpm_buf_destroy(&buf);
+ return rc;
+}
+
/**
* tpm2_unseal_trusted() - unseal the payload of a trusted key
*
@@ -728,8 +796,15 @@ int tpm2_unseal_trusted(struct tpm_chip *chip,
goto out;

rc = tpm2_unseal_cmd(chip, payload, options, blob_handle);
- tpm2_flush_context(chip, blob_handle);
+ if (rc)
+ goto flush;
+
+ if (payload->creation_len)
+ rc = tpm2_certify_creation(chip, payload, blob_handle);

+
+flush:
+ tpm2_flush_context(chip, blob_handle);
out:
tpm_put_ops(chip);

--
2.38.1.431.g37b22c650d-goog


2022-11-03 18:41:10

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 03/11] tpm: Allow PCR 23 to be restricted to kernel-only use

From: Matthew Garrett <[email protected]>

Introduce a new Kconfig, TCG_TPM_RESTRICT_PCR, which if enabled
restricts usermode's ability to extend or reset PCR 23.

Under certain circumstances it might be desirable to enable the creation
of TPM-backed secrets that are only accessible to the kernel. In an
ideal world this could be achieved by using TPM localities, but these
don't appear to be available on consumer systems. An alternative is to
simply block userland from modifying one of the resettable PCRs, leaving
it available to the kernel. If the kernel ensures that no userland can
access the TPM while it is carrying out work, it can reset PCR 23,
extend it to an arbitrary value, create or load a secret, and then reset
the PCR again. Even if userland somehow obtains the sealed material, it
will be unable to unseal it since PCR 23 will never be in the
appropriate state.

This Kconfig is only properly supported for systems with TPM2 devices.
For systems with TPM1 devices, having this Kconfig enabled completely
restricts usermode's access to the TPM. TPM1 contains support for
tunnelled transports, which usermode could use to smuggle commands
through that this Kconfig is attempting to restrict.

Link: https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Matthew Garrett <[email protected]>
Signed-off-by: Evan Green <[email protected]>
---

Changes in v4:
- Augment the commit message (Jarkko)

Changes in v3:
- Fix up commit message (Jarkko)
- tpm2_find_and_validate_cc() was split (Jarkko)
- Simply fully restrict TPM1 since v2 failed to account for tunnelled
transport sessions (Stefan and Jarkko).

Changes in v2:
- Fixed sparse warnings

drivers/char/tpm/Kconfig | 12 ++++++++++++
drivers/char/tpm/tpm-dev-common.c | 8 ++++++++
drivers/char/tpm/tpm.h | 19 +++++++++++++++++++
drivers/char/tpm/tpm1-cmd.c | 13 +++++++++++++
drivers/char/tpm/tpm2-cmd.c | 22 ++++++++++++++++++++++
5 files changed, 74 insertions(+)

diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
index 927088b2c3d3f2..c8ed54c66e399a 100644
--- a/drivers/char/tpm/Kconfig
+++ b/drivers/char/tpm/Kconfig
@@ -211,4 +211,16 @@ config TCG_FTPM_TEE
This driver proxies for firmware TPM running in TEE.

source "drivers/char/tpm/st33zp24/Kconfig"
+
+config TCG_TPM_RESTRICT_PCR
+ bool "Restrict userland access to PCR 23"
+ depends on TCG_TPM
+ help
+ If set, block userland from extending or resetting PCR 23. This allows it
+ to be restricted to in-kernel use, preventing userland from being able to
+ make use of data sealed to the TPM by the kernel. This is required for
+ secure hibernation support, but should be left disabled if any userland
+ may require access to PCR23. This is a TPM2-only feature, and if enabled
+ on a TPM1 machine will cause all usermode TPM commands to return EPERM due
+ to the complications introduced by tunnelled sessions in TPM1.2.
endif # TCG_TPM
diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
index dc4c0a0a512903..7a4e618c7d1942 100644
--- a/drivers/char/tpm/tpm-dev-common.c
+++ b/drivers/char/tpm/tpm-dev-common.c
@@ -198,6 +198,14 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
priv->response_read = false;
*off = 0;

+ if (priv->chip->flags & TPM_CHIP_FLAG_TPM2)
+ ret = tpm2_cmd_restricted(priv->chip, priv->data_buffer, size);
+ else
+ ret = tpm1_cmd_restricted(priv->chip, priv->data_buffer, size);
+
+ if (ret)
+ goto out;
+
/*
* If in nonblocking mode schedule an async job to send
* the command return the size.
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index f1e0f490176f01..c0845e3f9eda17 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -245,4 +245,23 @@ void tpm_bios_log_setup(struct tpm_chip *chip);
void tpm_bios_log_teardown(struct tpm_chip *chip);
int tpm_dev_common_init(void);
void tpm_dev_common_exit(void);
+
+#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
+#define TPM_RESTRICTED_PCR 23
+
+int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
+int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
+#else
+static inline int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
+ size_t size)
+{
+ return 0;
+}
+
+static inline int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
+ size_t size)
+{
+ return 0;
+}
+#endif
#endif
diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
index cf64c738510529..1869e89215fcb9 100644
--- a/drivers/char/tpm/tpm1-cmd.c
+++ b/drivers/char/tpm/tpm1-cmd.c
@@ -811,3 +811,16 @@ int tpm1_get_pcr_allocation(struct tpm_chip *chip)

return 0;
}
+
+#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
+int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
+{
+ /*
+ * Restrict all usermode commands on TPM1.2. Ideally we'd just restrict
+ * TPM_ORD_PCR_EXTEND and TPM_ORD_PCR_RESET, but TPM1.2 also supports
+ * tunnelled transport sessions where the kernel would be unable to filter
+ * commands.
+ */
+ return -EPERM;
+}
+#endif
diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
index 303ce2ea02a4b0..e0503cfd7bcfee 100644
--- a/drivers/char/tpm/tpm2-cmd.c
+++ b/drivers/char/tpm/tpm2-cmd.c
@@ -778,3 +778,25 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc)

return -1;
}
+
+#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
+int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
+{
+ int cc = tpm2_find_and_validate_cc(chip, NULL, buffer, size);
+ __be32 *handle;
+
+ switch (cc) {
+ case TPM2_CC_PCR_EXTEND:
+ case TPM2_CC_PCR_RESET:
+ if (size < (TPM_HEADER_SIZE + sizeof(u32)))
+ return -EINVAL;
+
+ handle = (__be32 *)&buffer[TPM_HEADER_SIZE];
+ if (be32_to_cpu(*handle) == TPM_RESTRICTED_PCR)
+ return -EPERM;
+ break;
+ }
+
+ return 0;
+}
+#endif
--
2.38.1.431.g37b22c650d-goog


2022-11-03 18:45:28

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 09/11] PM: hibernate: Mix user key in encrypted hibernate

Usermode may have their own data protection requirements when it comes
to encrypting the hibernate image. For example, users may want a policy
where the hibernate image is protected by a key derived both from
platform-level security as well as authentication data (such as a
password or PIN). This way, even if the platform is compromised (ie a
stolen laptop), sensitive data cannot be exfiltrated via the hibernate
image without additional data (like the user's password).

The kernel is already doing the encryption, but will be protecting its
key with the TPM alone. Allow usermode to mix in key content of their own
for the data portion of the hibernate image, so that the image
encryption key is determined both by a TPM-backed secret and
user-defined data.

To mix the user key in, we hash the kernel key followed by the user key,
and use the resulting hash as the new key. This allows usermode to mix
in its key material without giving it too much control over what key is
actually driving the encryption (which might be used to attack the
secret kernel key).

Limiting this to the data portion allows the kernel to receive the page
map and prepare its giant allocation even if this user key is not yet
available (ie the user has not yet finished typing in their password).
Once the user key becomes available, the data portion can be pushed
through to the kernel as well. This enables "preloading" scenarios,
where the hibernate image is loaded off of disk while the additional
key material (eg password) is being collected.

One annoyance of the "preloading" scheme is that hibernate image memory
is effectively double-allocated: first by the usermode process pulling
encrypted contents off of disk and holding it, and second by the kernel
in its giant allocation in prepare_image(). An interesting future
optimization would be to allow the kernel to accept and store encrypted
page data before the user key is available. This would remove the
double allocation problem, as usermode could push the encrypted pages
loaded from disk immediately without storing them. The kernel could defer
decryption of the data until the user key is available, while still
knowing the correct page locations to store the encrypted data in.

Signed-off-by: Evan Green <[email protected]>
---

(no changes since v2)

Changes in v2:
- Add missing static on snapshot_encrypted_byte_count()
- Fold in only the used kernel key bytes to the user key.
- Make the user key length 32 (Eric)
- Use CRYPTO_LIB_SHA256 for less boilerplate (Eric)

include/uapi/linux/suspend_ioctls.h | 15 ++-
kernel/power/Kconfig | 1 +
kernel/power/power.h | 1 +
kernel/power/snapenc.c | 158 ++++++++++++++++++++++++++--
kernel/power/snapshot.c | 5 +
kernel/power/user.c | 4 +
kernel/power/user.h | 12 +++
7 files changed, 185 insertions(+), 11 deletions(-)

diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h
index b73026ef824bb9..f93a22eac52dc2 100644
--- a/include/uapi/linux/suspend_ioctls.h
+++ b/include/uapi/linux/suspend_ioctls.h
@@ -25,6 +25,18 @@ struct uswsusp_key_blob {
__u8 nonce[USWSUSP_KEY_NONCE_SIZE];
} __attribute__((packed));

+/*
+ * Allow user mode to fold in key material for the data portion of the hibernate
+ * image.
+ */
+struct uswsusp_user_key {
+ /* Kernel returns the metadata size. */
+ __kernel_loff_t meta_size;
+ __u32 key_len;
+ __u8 key[32];
+ __u32 pad;
+};
+
#define SNAPSHOT_IOC_MAGIC '3'
#define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1)
#define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2)
@@ -42,6 +54,7 @@ struct uswsusp_key_blob {
#define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t)
#define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t)
#define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob)
-#define SNAPSHOT_IOC_MAXNR 21
+#define SNAPSHOT_SET_USER_KEY _IOWR(SNAPSHOT_IOC_MAGIC, 22, struct uswsusp_user_key)
+#define SNAPSHOT_IOC_MAXNR 22

#endif /* _LINUX_SUSPEND_IOCTLS_H */
diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
index 2f8acbd87b34dc..35bf48b925ebf6 100644
--- a/kernel/power/Kconfig
+++ b/kernel/power/Kconfig
@@ -97,6 +97,7 @@ config ENCRYPTED_HIBERNATION
depends on HIBERNATION_SNAPSHOT_DEV
depends on CRYPTO_AEAD2=y
depends on TRUSTED_KEYS=y
+ select CRYPTO_LIB_SHA256
default n
help
Enable support for kernel-based encryption of hibernation snapshots
diff --git a/kernel/power/power.h b/kernel/power/power.h
index b4f43394320961..5955e5cf692302 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -151,6 +151,7 @@ struct snapshot_handle {

extern unsigned int snapshot_additional_pages(struct zone *zone);
extern unsigned long snapshot_get_image_size(void);
+extern unsigned long snapshot_get_meta_page_count(void);
extern int snapshot_read_next(struct snapshot_handle *handle);
extern int snapshot_write_next(struct snapshot_handle *handle);
extern void snapshot_write_finalize(struct snapshot_handle *handle);
diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
index 7ff4fc66f7500c..50167a37c5bf23 100644
--- a/kernel/power/snapenc.c
+++ b/kernel/power/snapenc.c
@@ -6,6 +6,7 @@
#include <crypto/gcm.h>
#include <keys/trusted-type.h>
#include <linux/key-type.h>
+#include <crypto/sha.h>
#include <linux/random.h>
#include <linux/mm.h>
#include <linux/tpm.h>
@@ -21,6 +22,38 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256,
0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05,
0x5f, 0x49}};

+/* Derive a key from the kernel and user keys for data encryption. */
+static int snapshot_use_user_key(struct snapshot_data *data)
+{
+ u8 digest[SHA256_DIGEST_SIZE];
+ struct trusted_key_payload *payload = data->key->payload.data[0];
+ struct sha256_state sha256_state;
+
+ /*
+ * Hash the kernel key and the user key together. This folds in the user
+ * key, but not in a way that gives the user mode predictable control
+ * over the key bits.
+ */
+ sha256_init(&sha256_state);
+ sha256_update(&sha256_state, payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE);
+ sha256_update(&sha256_state, data->user_key, sizeof(data->user_key));
+ sha256_final(&sha256_state, digest);
+ return crypto_aead_setkey(data->aead_tfm,
+ digest,
+ SNAPSHOT_ENCRYPTION_KEY_SIZE);
+}
+
+/* Check to see if it's time to switch to the user key, and do it if so. */
+static int snapshot_check_user_key_switch(struct snapshot_data *data)
+{
+ if (data->user_key_valid && data->meta_size &&
+ data->crypt_total == data->meta_size) {
+ return snapshot_use_user_key(data);
+ }
+
+ return 0;
+}
+
/* Encrypt more data from the snapshot into the staging area. */
static int snapshot_encrypt_refill(struct snapshot_data *data)
{
@@ -32,6 +65,15 @@ static int snapshot_encrypt_refill(struct snapshot_data *data)
int pg_idx;
int res;

+ if (data->crypt_total == 0) {
+ data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT;
+
+ } else {
+ res = snapshot_check_user_key_switch(data);
+ if (res)
+ return res;
+ }
+
/*
* The first buffer is the associated data, set to the offset to prevent
* attacks that rearrange chunks.
@@ -42,6 +84,11 @@ static int snapshot_encrypt_refill(struct snapshot_data *data)
for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) {
void *buf = data->crypt_pages[pg_idx];

+ /* Stop at the meta page boundary to potentially switch keys. */
+ if (total &&
+ ((data->crypt_total + total) == data->meta_size))
+ break;
+
res = snapshot_read_next(&data->handle);
if (res < 0)
return res;
@@ -114,10 +161,10 @@ static int snapshot_decrypt_drain(struct snapshot_data *data)
sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE);

/*
- * It's possible this is the final decrypt, and there are fewer than
- * CHUNK_SIZE pages. If this is the case we would have just written the
- * auth tag into the first few bytes of a new page. Copy to the tag if
- * so.
+ * It's possible this is the final decrypt, or the final decrypt of the
+ * meta region, and there are fewer than CHUNK_SIZE pages. If this is
+ * the case we would have just written the auth tag into the first few
+ * bytes of a new page. Copy to the tag if so.
*/
if ((page_count < CHUNK_SIZE) &&
(data->crypt_offset - total) == sizeof(data->auth_tag)) {
@@ -172,7 +219,14 @@ static int snapshot_decrypt_drain(struct snapshot_data *data)
total += PAGE_SIZE;
}

+ if (data->crypt_total == 0)
+ data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT;
+
data->crypt_total += total;
+ res = snapshot_check_user_key_switch(data);
+ if (res)
+ return res;
+
return 0;
}

@@ -221,8 +275,26 @@ static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data,
if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) {
size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
+ size_t size_avail = PAGE_SIZE;
*buf = data->crypt_pages[pg_idx] + pg_off;
- return PAGE_SIZE - pg_off;
+
+ /*
+ * If this is the boundary where the meta pages end, then just
+ * return enough for the auth tag.
+ */
+ if (data->meta_size && (data->crypt_total < data->meta_size)) {
+ uint64_t total_done =
+ data->crypt_total + data->crypt_offset;
+
+ if ((total_done >= data->meta_size) &&
+ (total_done <
+ (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE))) {
+
+ size_avail = SNAPSHOT_AUTH_TAG_SIZE;
+ }
+ }
+
+ return size_avail - pg_off;
}

/* Use offsets just beyond the size to return the tag. */
@@ -304,9 +376,15 @@ ssize_t snapshot_write_encrypted(struct snapshot_data *data,
break;
}

- /* Drain the encrypted buffer if it's full. */
+ /*
+ * Drain the encrypted buffer if it's full, or if we hit the end
+ * of the meta pages and need a key change.
+ */
if ((data->crypt_offset >=
- ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) {
+ ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE)) ||
+ (data->meta_size && (data->crypt_total < data->meta_size) &&
+ ((data->crypt_total + data->crypt_offset) ==
+ (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE)))) {

int rc;

@@ -350,6 +428,8 @@ void snapshot_teardown_encryption(struct snapshot_data *data)
data->crypt_pages[i] = NULL;
}
}
+
+ memset(data->user_key, 0, sizeof(data->user_key));
}

static int snapshot_setup_encryption_common(struct snapshot_data *data)
@@ -359,6 +439,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data)
data->crypt_total = 0;
data->crypt_offset = 0;
data->crypt_size = 0;
+ data->user_key_valid = false;
memset(data->crypt_pages, 0, sizeof(data->crypt_pages));
/* This only works once per hibernate. */
if (data->aead_tfm)
@@ -661,15 +742,72 @@ int snapshot_set_encryption_key(struct snapshot_data *data,
return rc;
}

-loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
+static loff_t snapshot_encrypted_byte_count(loff_t plain_size)
{
- loff_t pages = raw_size >> PAGE_SHIFT;
+ loff_t pages = plain_size >> PAGE_SHIFT;
loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE;
/*
* The encrypted size is the normal size, plus a stitched in
* authentication tag for every chunk of pages.
*/
- return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
+ return plain_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
+}
+
+static loff_t snapshot_get_meta_data_size(void)
+{
+ loff_t pages = snapshot_get_meta_page_count();
+
+ return snapshot_encrypted_byte_count(pages << PAGE_SHIFT);
+}
+
+int snapshot_set_user_key(struct snapshot_data *data,
+ struct uswsusp_user_key __user *key)
+{
+ struct uswsusp_user_key user_key;
+ unsigned int key_len;
+ int rc;
+ loff_t size;
+
+ /*
+ * Return the metadata size, the number of bytes that can be fed in before
+ * the user data key is needed at resume time.
+ */
+ size = snapshot_get_meta_data_size();
+ rc = put_user(size, &key->meta_size);
+ if (rc)
+ return rc;
+
+ rc = copy_from_user(&user_key, key, sizeof(struct uswsusp_user_key));
+ if (rc)
+ return rc;
+
+ key_len = min_t(__u32, user_key.key_len, sizeof(data->user_key));
+ if (key_len < 8)
+ return -EINVAL;
+
+ /* Don't allow it if it's too late. */
+ if (data->crypt_total > data->meta_size)
+ return -EBUSY;
+
+ memset(data->user_key, 0, sizeof(data->user_key));
+ memcpy(data->user_key, user_key.key, key_len);
+ data->user_key_valid = true;
+ /* Install the key if the user is just under the wire. */
+ rc = snapshot_check_user_key_switch(data);
+ if (rc)
+ return rc;
+
+ return 0;
+}
+
+loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
+{
+ loff_t pages = raw_size >> PAGE_SHIFT;
+ loff_t meta_size;
+
+ pages -= snapshot_get_meta_page_count();
+ meta_size = snapshot_get_meta_data_size();
+ return snapshot_encrypted_byte_count(pages << PAGE_SHIFT) + meta_size;
}

int snapshot_finalize_decrypted_image(struct snapshot_data *data)
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 2a406753af9049..026ee511633bc9 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -2083,6 +2083,11 @@ unsigned long snapshot_get_image_size(void)
return nr_copy_pages + nr_meta_pages + 1;
}

+unsigned long snapshot_get_meta_page_count(void)
+{
+ return nr_meta_pages + 1;
+}
+
static int init_header(struct swsusp_info *info)
{
memset(info, 0, sizeof(struct swsusp_info));
diff --git a/kernel/power/user.c b/kernel/power/user.c
index bba5cdbd2c0239..a66e32c9596da8 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -427,6 +427,10 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
error = snapshot_set_encryption_key(data, (void __user *)arg);
break;

+ case SNAPSHOT_SET_USER_KEY:
+ error = snapshot_set_user_key(data, (void __user *)arg);
+ break;
+
default:
error = -ENOTTY;

diff --git a/kernel/power/user.h b/kernel/power/user.h
index 6c86fb64ebe13e..c1e80a835d0a38 100644
--- a/kernel/power/user.h
+++ b/kernel/power/user.h
@@ -32,6 +32,9 @@ struct snapshot_data {
uint64_t nonce_low;
uint64_t nonce_high;
struct key *key;
+ u8 user_key[SNAPSHOT_ENCRYPTION_KEY_SIZE];
+ bool user_key_valid;
+ uint64_t meta_size;
#endif

};
@@ -55,6 +58,9 @@ int snapshot_get_encryption_key(struct snapshot_data *data,
int snapshot_set_encryption_key(struct snapshot_data *data,
struct uswsusp_key_blob __user *key);

+int snapshot_set_user_key(struct snapshot_data *data,
+ struct uswsusp_user_key __user *key);
+
loff_t snapshot_get_encrypted_image_size(loff_t raw_size);

int snapshot_finalize_decrypted_image(struct snapshot_data *data);
@@ -89,6 +95,12 @@ static int snapshot_set_encryption_key(struct snapshot_data *data,
return -ENOTTY;
}

+static int snapshot_set_user_key(struct snapshot_data *data,
+ struct uswsusp_user_key __user *key)
+{
+ return -ENOTTY;
+}
+
static loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
{
return raw_size;
--
2.38.1.431.g37b22c650d-goog


2022-11-03 18:46:17

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 07/11] PM: hibernate: Add kernel-based encryption

Enabling the kernel to be able to do encryption and integrity checks on
the hibernate image prevents a malicious userspace from escalating to
kernel execution via hibernation resume. As a first step toward this, add
the scaffolding needed for the kernel to do AEAD encryption on the
hibernate image, giving us both secrecy and integrity.

We currently hardwire the encryption to be gcm(aes) in 16-page chunks.
This strikes a balance between minimizing the authentication tag
overhead on storage, and keeping a modest sized staging buffer. With
this chunk size, we'd generate 2MB of authentication tag data on an 8GB
hiberation image.

The encryption currently sits on top of the core snapshot functionality,
wired up only if requested in the uswsusp path. This could potentially
be lowered into the common snapshot code given a mechanism to stitch the
key contents into the image itself.

To avoid forcing usermode to deal with sequencing the auth tags in with
the data, we stitch the auth tags in to the snapshot after each chunk of
pages. This complicates the read and write functions, as we roll through
the flow of (for read) 1) fill the staging buffer with encrypted data,
2) feed the data pages out to user mode, 3) feed the tag out to user
mode. To avoid having each syscall return a small and variable amount
of data, the encrypted versions of read and write operate in a loop,
allowing an arbitrary amount of data through per syscall.

One alternative that would simplify things here would be a streaming
interface to AEAD. Then we could just stream the entire hibernate image
through directly, and handle a single tag at the end. However there is a
school of thought that suggests a streaming interface to AEAD represents
a loaded footgun, as it tempts the caller to act on the decrypted but
not yet verified data, defeating the purpose of AEAD.

With this change alone, we don't actually protect ourselves from
malicious userspace at all, since we kindly hand the key in plaintext
to usermode. In later changes, we'll seal the key with the TPM
before handing it back to usermode, so they can't decrypt or tamper with
the key themselves.

Signed-off-by: Evan Green <[email protected]>
---

Changes in v4:
- Local ordering and whitespace changes (Jarkko)

Documentation/power/userland-swsusp.rst | 8 +
include/uapi/linux/suspend_ioctls.h | 15 +-
kernel/power/Kconfig | 13 +
kernel/power/Makefile | 1 +
kernel/power/snapenc.c | 493 ++++++++++++++++++++++++
kernel/power/user.c | 40 +-
kernel/power/user.h | 103 +++++
7 files changed, 661 insertions(+), 12 deletions(-)
create mode 100644 kernel/power/snapenc.c
create mode 100644 kernel/power/user.h

diff --git a/Documentation/power/userland-swsusp.rst b/Documentation/power/userland-swsusp.rst
index 1cf62d80a9ca10..f759915a78ce98 100644
--- a/Documentation/power/userland-swsusp.rst
+++ b/Documentation/power/userland-swsusp.rst
@@ -115,6 +115,14 @@ SNAPSHOT_S2RAM
to resume the system from RAM if there's enough battery power or restore
its state on the basis of the saved suspend image otherwise)

+SNAPSHOT_ENABLE_ENCRYPTION
+ Enables encryption of the hibernate image within the kernel. Upon suspend
+ (ie when the snapshot device was opened for reading), returns a blob
+ representing the random encryption key the kernel created to encrypt the
+ hibernate image with. Upon resume (ie when the snapshot device was opened
+ for writing), receives a blob from usermode containing the key material
+ previously returned during hibernate.
+
The device's read() operation can be used to transfer the snapshot image from
the kernel. It has the following limitations:

diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h
index bcce04e21c0dce..b73026ef824bb9 100644
--- a/include/uapi/linux/suspend_ioctls.h
+++ b/include/uapi/linux/suspend_ioctls.h
@@ -13,6 +13,18 @@ struct resume_swap_area {
__u32 dev;
} __attribute__((packed));

+#define USWSUSP_KEY_NONCE_SIZE 16
+
+/*
+ * This structure is used to pass the kernel's hibernate encryption key in
+ * either direction.
+ */
+struct uswsusp_key_blob {
+ __u32 blob_len;
+ __u8 blob[512];
+ __u8 nonce[USWSUSP_KEY_NONCE_SIZE];
+} __attribute__((packed));
+
#define SNAPSHOT_IOC_MAGIC '3'
#define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1)
#define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2)
@@ -29,6 +41,7 @@ struct resume_swap_area {
#define SNAPSHOT_PREF_IMAGE_SIZE _IO(SNAPSHOT_IOC_MAGIC, 18)
#define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t)
#define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t)
-#define SNAPSHOT_IOC_MAXNR 20
+#define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob)
+#define SNAPSHOT_IOC_MAXNR 21

#endif /* _LINUX_SUSPEND_IOCTLS_H */
diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
index 60a1d3051cc79a..cd574af0b43379 100644
--- a/kernel/power/Kconfig
+++ b/kernel/power/Kconfig
@@ -92,6 +92,19 @@ config HIBERNATION_SNAPSHOT_DEV

If in doubt, say Y.

+config ENCRYPTED_HIBERNATION
+ bool "Encryption support for userspace snapshots"
+ depends on HIBERNATION_SNAPSHOT_DEV
+ depends on CRYPTO_AEAD2=y
+ default n
+ help
+ Enable support for kernel-based encryption of hibernation snapshots
+ created by uswsusp tools.
+
+ Say N if userspace handles the image encryption.
+
+ If in doubt, say N.
+
config PM_STD_PARTITION
string "Default resume partition"
depends on HIBERNATION
diff --git a/kernel/power/Makefile b/kernel/power/Makefile
index 874ad834dc8daf..7be08f2e0e3b68 100644
--- a/kernel/power/Makefile
+++ b/kernel/power/Makefile
@@ -16,6 +16,7 @@ obj-$(CONFIG_SUSPEND) += suspend.o
obj-$(CONFIG_PM_TEST_SUSPEND) += suspend_test.o
obj-$(CONFIG_HIBERNATION) += hibernate.o snapshot.o swap.o
obj-$(CONFIG_HIBERNATION_SNAPSHOT_DEV) += user.o
+obj-$(CONFIG_ENCRYPTED_HIBERNATION) += snapenc.o
obj-$(CONFIG_PM_AUTOSLEEP) += autosleep.o
obj-$(CONFIG_PM_WAKELOCKS) += wakelock.o

diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
new file mode 100644
index 00000000000000..f215df16dad4d3
--- /dev/null
+++ b/kernel/power/snapenc.c
@@ -0,0 +1,493 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* This file provides encryption support for system snapshots. */
+
+#include <linux/crypto.h>
+#include <crypto/aead.h>
+#include <crypto/gcm.h>
+#include <linux/random.h>
+#include <linux/mm.h>
+#include <linux/uaccess.h>
+
+#include "power.h"
+#include "user.h"
+
+/* Encrypt more data from the snapshot into the staging area. */
+static int snapshot_encrypt_refill(struct snapshot_data *data)
+{
+
+ struct aead_request *req = data->aead_req;
+ u8 nonce[GCM_AES_IV_SIZE];
+ DECLARE_CRYPTO_WAIT(wait);
+ size_t total = 0;
+ int pg_idx;
+ int res;
+
+ /*
+ * The first buffer is the associated data, set to the offset to prevent
+ * attacks that rearrange chunks.
+ */
+ sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total));
+
+ /* Load the crypt buffer with snapshot pages. */
+ for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) {
+ void *buf = data->crypt_pages[pg_idx];
+
+ res = snapshot_read_next(&data->handle);
+ if (res < 0)
+ return res;
+ if (res == 0)
+ break;
+
+ WARN_ON(res != PAGE_SIZE);
+
+ /*
+ * Copy the page into the staging area. A future optimization
+ * could potentially skip this copy for lowmem pages.
+ */
+ memcpy(buf, data_of(data->handle), PAGE_SIZE);
+ sg_set_buf(&data->sg[1 + pg_idx], buf, PAGE_SIZE);
+ total += PAGE_SIZE;
+ }
+
+ sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE);
+ aead_request_set_callback(req, 0, crypto_req_done, &wait);
+ /*
+ * Use incrementing nonces for each chunk, since a 64 bit value won't
+ * roll into re-use for any given hibernate image.
+ */
+ memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low));
+ memcpy(&nonce[sizeof(data->nonce_low)],
+ &data->nonce_high,
+ sizeof(nonce) - sizeof(data->nonce_low));
+
+ data->nonce_low += 1;
+ /* Total does not include AAD or the auth tag. */
+ aead_request_set_crypt(req, data->sg, data->sg, total, nonce);
+ res = crypto_wait_req(crypto_aead_encrypt(req), &wait);
+ if (res)
+ return res;
+
+ data->crypt_size = total;
+ data->crypt_total += total;
+ return 0;
+}
+
+/* Decrypt data from the staging area and push it to the snapshot. */
+static int snapshot_decrypt_drain(struct snapshot_data *data)
+{
+ struct aead_request *req = data->aead_req;
+ u8 nonce[GCM_AES_IV_SIZE];
+ DECLARE_CRYPTO_WAIT(wait);
+ int page_count;
+ size_t total;
+ int pg_idx;
+ int res;
+
+ /* Set up the associated data. */
+ sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total));
+
+ /*
+ * Get the number of full pages, which could be short at the end. There
+ * should also be a tag at the end, so the offset won't be an even page.
+ */
+ page_count = data->crypt_offset >> PAGE_SHIFT;
+ total = page_count << PAGE_SHIFT;
+ if ((total == 0) || (total == data->crypt_offset))
+ return -EINVAL;
+
+ /*
+ * Load the sg list with the crypt buffer. Inline decrypt back into the
+ * staging buffer. A future optimization could decrypt directly into
+ * lowmem pages.
+ */
+ for (pg_idx = 0; pg_idx < page_count; pg_idx++)
+ sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE);
+
+ /*
+ * It's possible this is the final decrypt, and there are fewer than
+ * CHUNK_SIZE pages. If this is the case we would have just written the
+ * auth tag into the first few bytes of a new page. Copy to the tag if
+ * so.
+ */
+ if ((page_count < CHUNK_SIZE) &&
+ (data->crypt_offset - total) == sizeof(data->auth_tag)) {
+
+ memcpy(data->auth_tag,
+ data->crypt_pages[pg_idx],
+ sizeof(data->auth_tag));
+
+ } else if (data->crypt_offset !=
+ ((CHUNK_SIZE << PAGE_SHIFT) + SNAPSHOT_AUTH_TAG_SIZE)) {
+
+ return -EINVAL;
+ }
+
+ sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE);
+ aead_request_set_callback(req, 0, crypto_req_done, &wait);
+ memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low));
+ memcpy(&nonce[sizeof(data->nonce_low)],
+ &data->nonce_high,
+ sizeof(nonce) - sizeof(data->nonce_low));
+
+ data->nonce_low += 1;
+ aead_request_set_crypt(req, data->sg, data->sg, total + SNAPSHOT_AUTH_TAG_SIZE, nonce);
+ res = crypto_wait_req(crypto_aead_decrypt(req), &wait);
+ if (res)
+ return res;
+
+ data->crypt_size = 0;
+ data->crypt_offset = 0;
+
+ /* Push the decrypted pages further down the stack. */
+ total = 0;
+ for (pg_idx = 0; pg_idx < page_count; pg_idx++) {
+ void *buf = data->crypt_pages[pg_idx];
+
+ res = snapshot_write_next(&data->handle);
+ if (res < 0)
+ return res;
+ if (res == 0)
+ break;
+
+ if (!data_of(data->handle))
+ return -EINVAL;
+
+ WARN_ON(res != PAGE_SIZE);
+
+ /*
+ * Copy the page into the staging area. A future optimization
+ * could potentially skip this copy for lowmem pages.
+ */
+ memcpy(data_of(data->handle), buf, PAGE_SIZE);
+ total += PAGE_SIZE;
+ }
+
+ data->crypt_total += total;
+ return 0;
+}
+
+static ssize_t snapshot_read_next_encrypted(struct snapshot_data *data,
+ void **buf)
+{
+ size_t tag_off;
+
+ /* Refill the encrypted buffer if it's empty. */
+ if ((data->crypt_size == 0) ||
+ (data->crypt_offset >=
+ (data->crypt_size + SNAPSHOT_AUTH_TAG_SIZE))) {
+
+ int rc;
+
+ data->crypt_size = 0;
+ data->crypt_offset = 0;
+ rc = snapshot_encrypt_refill(data);
+ if (rc < 0)
+ return rc;
+ }
+
+ /* Return data pages if the offset is in that region. */
+ if (data->crypt_offset < data->crypt_size) {
+ size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
+ size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
+ *buf = data->crypt_pages[pg_idx] + pg_off;
+ return PAGE_SIZE - pg_off;
+ }
+
+ /* Use offsets just beyond the size to return the tag. */
+ tag_off = data->crypt_offset - data->crypt_size;
+ if (tag_off > SNAPSHOT_AUTH_TAG_SIZE)
+ tag_off = SNAPSHOT_AUTH_TAG_SIZE;
+
+ *buf = data->auth_tag + tag_off;
+ return SNAPSHOT_AUTH_TAG_SIZE - tag_off;
+}
+
+static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data,
+ void **buf)
+{
+ size_t tag_off;
+
+ /* Return data pages if the offset is in that region. */
+ if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) {
+ size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
+ size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
+ *buf = data->crypt_pages[pg_idx] + pg_off;
+ return PAGE_SIZE - pg_off;
+ }
+
+ /* Use offsets just beyond the size to return the tag. */
+ tag_off = data->crypt_offset - (PAGE_SIZE * CHUNK_SIZE);
+ if (tag_off > SNAPSHOT_AUTH_TAG_SIZE)
+ tag_off = SNAPSHOT_AUTH_TAG_SIZE;
+
+ *buf = data->auth_tag + tag_off;
+ return SNAPSHOT_AUTH_TAG_SIZE - tag_off;
+}
+
+ssize_t snapshot_read_encrypted(struct snapshot_data *data,
+ char __user *buf, size_t count, loff_t *offp)
+{
+ ssize_t total = 0;
+
+ /* Loop getting buffers of varying sizes and copying to userspace. */
+ while (count) {
+ size_t copy_size;
+ size_t not_done;
+ void *src;
+ ssize_t src_size = snapshot_read_next_encrypted(data, &src);
+
+ if (src_size <= 0) {
+ if (total == 0)
+ return src_size;
+
+ break;
+ }
+
+ copy_size = min(count, (size_t)src_size);
+ not_done = copy_to_user(buf + total, src, copy_size);
+ copy_size -= not_done;
+ total += copy_size;
+ count -= copy_size;
+ data->crypt_offset += copy_size;
+ if (copy_size == 0) {
+ if (total == 0)
+ return -EFAULT;
+
+ break;
+ }
+ }
+
+ *offp += total;
+ return total;
+}
+
+ssize_t snapshot_write_encrypted(struct snapshot_data *data,
+ const char __user *buf, size_t count,
+ loff_t *offp)
+{
+ ssize_t total = 0;
+
+ /* Loop getting buffers of varying sizes and copying from. */
+ while (count) {
+ size_t copy_size;
+ size_t not_done;
+ void *dst;
+ ssize_t dst_size = snapshot_write_next_encrypted(data, &dst);
+
+ if (dst_size <= 0) {
+ if (total == 0)
+ return dst_size;
+
+ break;
+ }
+
+ copy_size = min(count, (size_t)dst_size);
+ not_done = copy_from_user(dst, buf + total, copy_size);
+ copy_size -= not_done;
+ total += copy_size;
+ count -= copy_size;
+ data->crypt_offset += copy_size;
+ if (copy_size == 0) {
+ if (total == 0)
+ return -EFAULT;
+
+ break;
+ }
+
+ /* Drain the encrypted buffer if it's full. */
+ if ((data->crypt_offset >=
+ ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) {
+
+ int rc;
+
+ rc = snapshot_decrypt_drain(data);
+ if (rc < 0)
+ return rc;
+ }
+ }
+
+ *offp += total;
+ return total;
+}
+
+void snapshot_teardown_encryption(struct snapshot_data *data)
+{
+ int i;
+
+ if (data->aead_req) {
+ aead_request_free(data->aead_req);
+ data->aead_req = NULL;
+ }
+
+ if (data->aead_tfm) {
+ crypto_free_aead(data->aead_tfm);
+ data->aead_tfm = NULL;
+ }
+
+ for (i = 0; i < CHUNK_SIZE; i++) {
+ if (data->crypt_pages[i]) {
+ free_page((unsigned long)data->crypt_pages[i]);
+ data->crypt_pages[i] = NULL;
+ }
+ }
+}
+
+static int snapshot_setup_encryption_common(struct snapshot_data *data)
+{
+ int i, rc;
+
+ data->crypt_total = 0;
+ data->crypt_offset = 0;
+ data->crypt_size = 0;
+ memset(data->crypt_pages, 0, sizeof(data->crypt_pages));
+ /* This only works once per hibernate. */
+ if (data->aead_tfm)
+ return -EINVAL;
+
+ /* Set up the encryption transform */
+ data->aead_tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
+ if (IS_ERR(data->aead_tfm)) {
+ rc = PTR_ERR(data->aead_tfm);
+ data->aead_tfm = NULL;
+ return rc;
+ }
+
+ rc = -ENOMEM;
+ data->aead_req = aead_request_alloc(data->aead_tfm, GFP_KERNEL);
+ if (data->aead_req == NULL)
+ goto setup_fail;
+
+ /* Allocate the staging area */
+ for (i = 0; i < CHUNK_SIZE; i++) {
+ data->crypt_pages[i] = (void *)__get_free_page(GFP_ATOMIC);
+ if (data->crypt_pages[i] == NULL)
+ goto setup_fail;
+ }
+
+ sg_init_table(data->sg, CHUNK_SIZE + 2);
+
+ /*
+ * The associated data will be the offset so that blocks can't be
+ * rearranged.
+ */
+ aead_request_set_ad(data->aead_req, sizeof(data->crypt_total));
+ rc = crypto_aead_setauthsize(data->aead_tfm, SNAPSHOT_AUTH_TAG_SIZE);
+ if (rc)
+ goto setup_fail;
+
+ return 0;
+
+setup_fail:
+ snapshot_teardown_encryption(data);
+ return rc;
+}
+
+int snapshot_get_encryption_key(struct snapshot_data *data,
+ struct uswsusp_key_blob __user *key)
+{
+ u8 aead_key[SNAPSHOT_ENCRYPTION_KEY_SIZE];
+ u8 nonce[USWSUSP_KEY_NONCE_SIZE];
+ int rc;
+
+ /* Don't pull a random key from a world that can be reset. */
+ if (data->ready)
+ return -EPIPE;
+
+ rc = snapshot_setup_encryption_common(data);
+ if (rc)
+ return rc;
+
+ /* Build a random starting nonce. */
+ get_random_bytes(nonce, sizeof(nonce));
+ memcpy(&data->nonce_low, &nonce[0], sizeof(data->nonce_low));
+ memcpy(&data->nonce_high, &nonce[8], sizeof(data->nonce_high));
+ /* Build a random key */
+ get_random_bytes(aead_key, sizeof(aead_key));
+ rc = crypto_aead_setkey(data->aead_tfm, aead_key, sizeof(aead_key));
+ if (rc)
+ goto fail;
+
+ /* Hand the key back to user mode (to be changed!) */
+ rc = put_user(sizeof(struct uswsusp_key_blob), &key->blob_len);
+ if (rc)
+ goto fail;
+
+ rc = copy_to_user(&key->blob, &aead_key, sizeof(aead_key));
+ if (rc)
+ goto fail;
+
+ rc = copy_to_user(&key->nonce, &nonce, sizeof(nonce));
+ if (rc)
+ goto fail;
+
+ return 0;
+
+fail:
+ snapshot_teardown_encryption(data);
+ return rc;
+}
+
+int snapshot_set_encryption_key(struct snapshot_data *data,
+ struct uswsusp_key_blob __user *key)
+{
+ struct uswsusp_key_blob blob;
+ int rc;
+
+ /* It's too late if data's been pushed in. */
+ if (data->handle.cur)
+ return -EPIPE;
+
+ rc = snapshot_setup_encryption_common(data);
+ if (rc)
+ return rc;
+
+ /* Load the key from user mode. */
+ rc = copy_from_user(&blob, key, sizeof(struct uswsusp_key_blob));
+ if (rc)
+ goto crypto_setup_fail;
+
+ if (blob.blob_len != sizeof(struct uswsusp_key_blob)) {
+ rc = -EINVAL;
+ goto crypto_setup_fail;
+ }
+
+ rc = crypto_aead_setkey(data->aead_tfm,
+ blob.blob,
+ SNAPSHOT_ENCRYPTION_KEY_SIZE);
+
+ if (rc)
+ goto crypto_setup_fail;
+
+ /* Load the starting nonce. */
+ memcpy(&data->nonce_low, &blob.nonce[0], sizeof(data->nonce_low));
+ memcpy(&data->nonce_high, &blob.nonce[8], sizeof(data->nonce_high));
+ return 0;
+
+crypto_setup_fail:
+ snapshot_teardown_encryption(data);
+ return rc;
+}
+
+loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
+{
+ loff_t pages = raw_size >> PAGE_SHIFT;
+ loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE;
+ /*
+ * The encrypted size is the normal size, plus a stitched in
+ * authentication tag for every chunk of pages.
+ */
+ return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
+}
+
+int snapshot_finalize_decrypted_image(struct snapshot_data *data)
+{
+ int rc;
+
+ if (data->crypt_offset != 0) {
+ rc = snapshot_decrypt_drain(data);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
diff --git a/kernel/power/user.c b/kernel/power/user.c
index 3a4e70366f354c..bba5cdbd2c0239 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -25,19 +25,10 @@
#include <linux/uaccess.h>

#include "power.h"
+#include "user.h"

static bool need_wait;
-
-static struct snapshot_data {
- struct snapshot_handle handle;
- int swap;
- int mode;
- bool frozen;
- bool ready;
- bool platform_support;
- bool free_bitmaps;
- dev_t dev;
-} snapshot_state;
+struct snapshot_data snapshot_state;

int is_hibernate_resume_dev(dev_t dev)
{
@@ -122,6 +113,7 @@ static int snapshot_release(struct inode *inode, struct file *filp)
} else if (data->free_bitmaps) {
free_basic_memory_bitmaps();
}
+ snapshot_teardown_encryption(data);
pm_notifier_call_chain(data->mode == O_RDONLY ?
PM_POST_HIBERNATION : PM_POST_RESTORE);
hibernate_release();
@@ -146,6 +138,12 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
res = -ENODATA;
goto Unlock;
}
+
+ if (snapshot_encryption_enabled(data)) {
+ res = snapshot_read_encrypted(data, buf, count, offp);
+ goto Unlock;
+ }
+
if (!pg_offp) { /* on page boundary? */
res = snapshot_read_next(&data->handle);
if (res <= 0)
@@ -182,6 +180,11 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,

data = filp->private_data;

+ if (snapshot_encryption_enabled(data)) {
+ res = snapshot_write_encrypted(data, buf, count, offp);
+ goto unlock;
+ }
+
if (!pg_offp) {
res = snapshot_write_next(&data->handle);
if (res <= 0)
@@ -317,6 +320,12 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
break;

case SNAPSHOT_ATOMIC_RESTORE:
+ if (snapshot_encryption_enabled(data)) {
+ error = snapshot_finalize_decrypted_image(data);
+ if (error)
+ break;
+ }
+
snapshot_write_finalize(&data->handle);
if (data->mode != O_WRONLY || !data->frozen ||
!snapshot_image_loaded(&data->handle)) {
@@ -352,6 +361,8 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
}
size = snapshot_get_image_size();
size <<= PAGE_SHIFT;
+ if (snapshot_encryption_enabled(data))
+ size = snapshot_get_encrypted_image_size(size);
error = put_user(size, (loff_t __user *)arg);
break;

@@ -409,6 +420,13 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
error = snapshot_set_swap_area(data, (void __user *)arg);
break;

+ case SNAPSHOT_ENABLE_ENCRYPTION:
+ if (data->mode == O_RDONLY)
+ error = snapshot_get_encryption_key(data, (void __user *)arg);
+ else
+ error = snapshot_set_encryption_key(data, (void __user *)arg);
+ break;
+
default:
error = -ENOTTY;

diff --git a/kernel/power/user.h b/kernel/power/user.h
new file mode 100644
index 00000000000000..ac429782abff85
--- /dev/null
+++ b/kernel/power/user.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/crypto.h>
+#include <crypto/aead.h>
+#include <crypto/aes.h>
+
+#define SNAPSHOT_ENCRYPTION_KEY_SIZE AES_KEYSIZE_128
+#define SNAPSHOT_AUTH_TAG_SIZE 16
+
+/* Define the number of pages in a single AEAD encryption chunk. */
+#define CHUNK_SIZE 16
+
+struct snapshot_data {
+ struct snapshot_handle handle;
+ int swap;
+ int mode;
+ bool frozen;
+ bool ready;
+ bool platform_support;
+ bool free_bitmaps;
+ dev_t dev;
+
+#if defined(CONFIG_ENCRYPTED_HIBERNATION)
+ struct crypto_aead *aead_tfm;
+ struct aead_request *aead_req;
+ void *crypt_pages[CHUNK_SIZE];
+ u8 auth_tag[SNAPSHOT_AUTH_TAG_SIZE];
+ struct scatterlist sg[CHUNK_SIZE + 2]; /* Add room for AD and auth tag. */
+ size_t crypt_offset;
+ size_t crypt_size;
+ uint64_t crypt_total;
+ uint64_t nonce_low;
+ uint64_t nonce_high;
+#endif
+
+};
+
+extern struct snapshot_data snapshot_state;
+
+/* kernel/power/swapenc.c routines */
+#if defined(CONFIG_ENCRYPTED_HIBERNATION)
+
+ssize_t snapshot_read_encrypted(struct snapshot_data *data,
+ char __user *buf, size_t count, loff_t *offp);
+
+ssize_t snapshot_write_encrypted(struct snapshot_data *data,
+ const char __user *buf, size_t count,
+ loff_t *offp);
+
+void snapshot_teardown_encryption(struct snapshot_data *data);
+int snapshot_get_encryption_key(struct snapshot_data *data,
+ struct uswsusp_key_blob __user *key);
+
+int snapshot_set_encryption_key(struct snapshot_data *data,
+ struct uswsusp_key_blob __user *key);
+
+loff_t snapshot_get_encrypted_image_size(loff_t raw_size);
+
+int snapshot_finalize_decrypted_image(struct snapshot_data *data);
+
+#define snapshot_encryption_enabled(data) (!!(data)->aead_tfm)
+
+#else
+
+ssize_t snapshot_read_encrypted(struct snapshot_data *data,
+ char __user *buf, size_t count, loff_t *offp)
+{
+ return -ENOTTY;
+}
+
+ssize_t snapshot_write_encrypted(struct snapshot_data *data,
+ const char __user *buf, size_t count,
+ loff_t *offp)
+{
+ return -ENOTTY;
+}
+
+static void snapshot_teardown_encryption(struct snapshot_data *data) {}
+static int snapshot_get_encryption_key(struct snapshot_data *data,
+ struct uswsusp_key_blob __user *key)
+{
+ return -ENOTTY;
+}
+
+static int snapshot_set_encryption_key(struct snapshot_data *data,
+ struct uswsusp_key_blob __user *key)
+{
+ return -ENOTTY;
+}
+
+static loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
+{
+ return raw_size;
+}
+
+static int snapshot_finalize_decrypted_image(struct snapshot_data *data)
+{
+ return -ENOTTY;
+}
+
+#define snapshot_encryption_enabled(data) (0)
+
+#endif
--
2.38.1.431.g37b22c650d-goog


2022-11-03 18:46:25

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 05/11] security: keys: trusted: Allow storage of PCR values in creation data

From: Matthew Garrett <[email protected]>

When TPMs generate keys, they can also generate some information
describing the state of the PCRs at creation time. This data can then
later be certified by the TPM, allowing verification of the PCR values.
This allows us to determine the state of the system at the time a key
was generated. Add an additional argument to the trusted key creation
options, allowing the user to provide the set of PCRs that should have
their values incorporated into the creation data.

Link: https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Matthew Garrett <[email protected]>
Signed-off-by: Evan Green <[email protected]>
Reviewed-by: Ben Boeckel <[email protected]>
---

(no changes since v3)

Changes in v3:
- Clarified creationpcrs documentation (Ben)

.../security/keys/trusted-encrypted.rst | 6 +++++
include/keys/trusted-type.h | 1 +
security/keys/trusted-keys/trusted_tpm1.c | 9 +++++++
security/keys/trusted-keys/trusted_tpm2.c | 25 +++++++++++++++++--
4 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/Documentation/security/keys/trusted-encrypted.rst b/Documentation/security/keys/trusted-encrypted.rst
index 9bc9db8ec6517c..a1872964fe862f 100644
--- a/Documentation/security/keys/trusted-encrypted.rst
+++ b/Documentation/security/keys/trusted-encrypted.rst
@@ -199,6 +199,12 @@ Usage::
policyhandle= handle to an authorization policy session that defines the
same policy and with the same hash algorithm as was used to
seal the key.
+ creationpcrs= hex integer representing the set of PCRs to be
+ included in the creation data. For each bit set, the
+ corresponding PCR will be included in the key creation
+ data. Bit 0 corresponds to PCR0. Currently only the first
+ PC standard 24 PCRs are supported on the currently active
+ bank. Leading zeroes are optional. TPM2 only.

"keyctl print" returns an ascii hex copy of the sealed key, which is in standard
TPM_STORED_DATA format. The key length for new keys are always in bytes.
diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
index 209086fed240a5..8523d41507b2a4 100644
--- a/include/keys/trusted-type.h
+++ b/include/keys/trusted-type.h
@@ -54,6 +54,7 @@ struct trusted_key_options {
uint32_t policydigest_len;
unsigned char policydigest[MAX_DIGEST_SIZE];
uint32_t policyhandle;
+ uint32_t creation_pcrs;
};

struct trusted_key_ops {
diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
index aa108bea6739b3..2975827c01bec0 100644
--- a/security/keys/trusted-keys/trusted_tpm1.c
+++ b/security/keys/trusted-keys/trusted_tpm1.c
@@ -713,6 +713,7 @@ enum {
Opt_hash,
Opt_policydigest,
Opt_policyhandle,
+ Opt_creationpcrs,
};

static const match_table_t key_tokens = {
@@ -725,6 +726,7 @@ static const match_table_t key_tokens = {
{Opt_hash, "hash=%s"},
{Opt_policydigest, "policydigest=%s"},
{Opt_policyhandle, "policyhandle=%s"},
+ {Opt_creationpcrs, "creationpcrs=%s"},
{Opt_err, NULL}
};

@@ -858,6 +860,13 @@ static int getoptions(char *c, struct trusted_key_payload *pay,
return -EINVAL;
opt->policyhandle = handle;
break;
+ case Opt_creationpcrs:
+ if (!tpm2)
+ return -EINVAL;
+ res = kstrtoint(args[0].from, 16, &opt->creation_pcrs);
+ if (res < 0)
+ return -EINVAL;
+ break;
default:
return -EINVAL;
}
diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
index e1388d7d799a36..a7ad83bc0e5396 100644
--- a/security/keys/trusted-keys/trusted_tpm2.c
+++ b/security/keys/trusted-keys/trusted_tpm2.c
@@ -401,7 +401,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
struct tpm_buf buf;
u32 hash;
u32 flags;
- int i;
+ int i, j;
int rc;

for (i = 0; i < ARRAY_SIZE(tpm2_hash_map); i++) {
@@ -470,7 +470,28 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
tpm_buf_append_u16(&buf, 0);

/* creation PCR */
- tpm_buf_append_u32(&buf, 0);
+ if (options->creation_pcrs) {
+ /* One bank */
+ tpm_buf_append_u32(&buf, 1);
+ /* Which bank to use */
+ tpm_buf_append_u16(&buf, hash);
+ /* Length of the PCR bitmask */
+ tpm_buf_append_u8(&buf, 3);
+ /* PCR bitmask */
+ for (i = 0; i < 3; i++) {
+ char tmp = 0;
+
+ for (j = 0; j < 8; j++) {
+ char bit = (i * 8) + j;
+
+ if (options->creation_pcrs & (1 << bit))
+ tmp |= (1 << j);
+ }
+ tpm_buf_append_u8(&buf, tmp);
+ }
+ } else {
+ tpm_buf_append_u32(&buf, 0);
+ }

if (buf.flags & TPM_BUF_OVERFLOW) {
rc = -E2BIG;
--
2.38.1.431.g37b22c650d-goog


2022-11-03 18:46:39

by Evan Green

[permalink] [raw]
Subject: [PATCH v4 11/11] PM: hibernate: seal the encryption key with a PCR policy

The key blob is not secret, and by default the TPM will happily unseal
it regardless of system state. We can protect against that by sealing
the secret with a PCR policy - if the current PCR state doesn't match,
the TPM will refuse to release the secret. For now let's just seal it to
PCR 23. In the long term we may want a more flexible policy around this,
such as including PCR 7 for PCs or 0 for Chrome OS.

Link: https://lore.kernel.org/all/[email protected]/
Co-developed-by: Matthew Garrett <[email protected]>
Signed-off-by: Matthew Garrett <[email protected]>
Signed-off-by: Evan Green <[email protected]>

---

Changes in v4:
- Local variable ordering (Jarkko)

Changes in v3:
- Changed funky tag to Co-developed-by (Kees)

Changes in v2:
- Fix sparse warnings
- Fix session type comment (Andrey)
- Eliminate extra label in get/create_kernel_key() (Andrey)
- Call tpm_try_get_ops() before calling tpm2_flush_context().

include/linux/tpm.h | 4 +
kernel/power/snapenc.c | 166 +++++++++++++++++++++++++++++++++++++++--
2 files changed, 165 insertions(+), 5 deletions(-)

diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index 9c2ee3e30ffa5d..252a8a92a7ff5b 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -233,18 +233,22 @@ enum tpm2_command_codes {
TPM2_CC_CONTEXT_LOAD = 0x0161,
TPM2_CC_CONTEXT_SAVE = 0x0162,
TPM2_CC_FLUSH_CONTEXT = 0x0165,
+ TPM2_CC_START_AUTH_SESSION = 0x0176,
TPM2_CC_VERIFY_SIGNATURE = 0x0177,
TPM2_CC_GET_CAPABILITY = 0x017A,
TPM2_CC_GET_RANDOM = 0x017B,
TPM2_CC_PCR_READ = 0x017E,
+ TPM2_CC_POLICY_PCR = 0x017F,
TPM2_CC_PCR_EXTEND = 0x0182,
TPM2_CC_EVENT_SEQUENCE_COMPLETE = 0x0185,
TPM2_CC_HASH_SEQUENCE_START = 0x0186,
+ TPM2_CC_POLICY_GET_DIGEST = 0x0189,
TPM2_CC_CREATE_LOADED = 0x0191,
TPM2_CC_LAST = 0x0193, /* Spec 1.36 */
};

enum tpm2_permanent_handles {
+ TPM2_RH_NULL = 0x40000007,
TPM2_RS_PW = 0x40000009,
};

diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
index 2f421061498246..23f4d09ced578b 100644
--- a/kernel/power/snapenc.c
+++ b/kernel/power/snapenc.c
@@ -438,6 +438,111 @@ void snapshot_teardown_encryption(struct snapshot_data *data)
memset(data->user_key, 0, sizeof(data->user_key));
}

+static int tpm_setup_policy(struct tpm_chip *chip, int *session_handle)
+{
+ struct tpm_header *head;
+ struct tpm_buf buf;
+ char nonce[32] = {0x00};
+ int rc;
+
+ rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS,
+ TPM2_CC_START_AUTH_SESSION);
+ if (rc)
+ return rc;
+
+ /* Decrypt key */
+ tpm_buf_append_u32(&buf, TPM2_RH_NULL);
+
+ /* Auth entity */
+ tpm_buf_append_u32(&buf, TPM2_RH_NULL);
+
+ /* Nonce - blank is fine here */
+ tpm_buf_append_u16(&buf, sizeof(nonce));
+ tpm_buf_append(&buf, nonce, sizeof(nonce));
+
+ /* Encrypted secret - empty */
+ tpm_buf_append_u16(&buf, 0);
+
+ /* Session type - policy */
+ tpm_buf_append_u8(&buf, 0x01);
+
+ /* Encryption type - NULL */
+ tpm_buf_append_u16(&buf, TPM_ALG_NULL);
+
+ /* Hash type - SHA256 */
+ tpm_buf_append_u16(&buf, TPM_ALG_SHA256);
+
+ rc = tpm_send(chip, buf.data, tpm_buf_length(&buf));
+ if (rc)
+ goto out;
+
+ head = (struct tpm_header *)buf.data;
+ if (be32_to_cpu(head->length) != sizeof(struct tpm_header) +
+ sizeof(u32) + sizeof(u16) + sizeof(nonce)) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ *session_handle = be32_to_cpu(*(__be32 *)&buf.data[10]);
+ memcpy(nonce, &buf.data[16], sizeof(nonce));
+ tpm_buf_destroy(&buf);
+ rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_POLICY_PCR);
+ if (rc)
+ return rc;
+
+ tpm_buf_append_u32(&buf, *session_handle);
+
+ /* PCR digest - read from the PCR, we'll verify creation data later */
+ tpm_buf_append_u16(&buf, 0);
+
+ /* One PCR */
+ tpm_buf_append_u32(&buf, 1);
+
+ /* SHA256 banks */
+ tpm_buf_append_u16(&buf, TPM_ALG_SHA256);
+
+ /* Select PCR 23 */
+ tpm_buf_append_u32(&buf, 0x03000080);
+ rc = tpm_send(chip, buf.data, tpm_buf_length(&buf));
+ if (rc)
+ goto out;
+
+out:
+ tpm_buf_destroy(&buf);
+ return rc;
+}
+
+static int tpm_policy_get_digest(struct tpm_chip *chip, int handle,
+ char *digest)
+{
+ struct tpm_header *head;
+ struct tpm_buf buf;
+ int rc;
+
+ rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_POLICY_GET_DIGEST);
+ if (rc)
+ return rc;
+
+ tpm_buf_append_u32(&buf, handle);
+ rc = tpm_send(chip, buf.data, tpm_buf_length(&buf));
+
+ if (rc)
+ goto out;
+
+ head = (struct tpm_header *)buf.data;
+ if (be32_to_cpu(head->length) != sizeof(struct tpm_header) +
+ sizeof(u16) + SHA256_DIGEST_SIZE) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ memcpy(digest, &buf.data[12], SHA256_DIGEST_SIZE);
+
+out:
+ tpm_buf_destroy(&buf);
+ return rc;
+}
+
static int snapshot_setup_encryption_common(struct snapshot_data *data)
{
int i, rc;
@@ -492,11 +597,16 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data)
static int snapshot_create_kernel_key(struct snapshot_data *data)
{
/* Create a key sealed by the SRK. */
- char *keyinfo = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000";
+ const char *keytemplate =
+ "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000\tpolicydigest=%s";
const struct cred *cred = current_cred();
struct tpm_digest *digests = NULL;
+ char policy[SHA256_DIGEST_SIZE];
+ char *policydigest = NULL;
+ int session_handle = -1;
struct key *key = NULL;
struct tpm_chip *chip;
+ char *keyinfo = NULL;
int ret, i;

chip = tpm_default_chip();
@@ -529,6 +639,28 @@ static int snapshot_create_kernel_key(struct snapshot_data *data)
if (ret != 0)
goto out;

+ policydigest = kmalloc(SHA256_DIGEST_SIZE * 2 + 1, GFP_KERNEL);
+ if (!policydigest) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ ret = tpm_setup_policy(chip, &session_handle);
+ if (ret != 0)
+ goto out;
+
+ ret = tpm_policy_get_digest(chip, session_handle, policy);
+ if (ret != 0)
+ goto out;
+
+ bin2hex(policydigest, policy, SHA256_DIGEST_SIZE);
+ policydigest[SHA256_DIGEST_SIZE * 2] = '\0';
+ keyinfo = kasprintf(GFP_KERNEL, keytemplate, policydigest);
+ if (!keyinfo) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID,
GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA,
NULL);
@@ -539,7 +671,7 @@ static int snapshot_create_kernel_key(struct snapshot_data *data)
goto out;
}

- ret = key_instantiate_and_link(key, keyinfo, sizeof(keyinfo), NULL,
+ ret = key_instantiate_and_link(key, keyinfo, strlen(keyinfo) + 1, NULL,
NULL);
if (ret != 0)
goto out;
@@ -553,7 +685,16 @@ static int snapshot_create_kernel_key(struct snapshot_data *data)
key_put(key);
}

+ if (session_handle != -1) {
+ if (tpm_try_get_ops(chip) == 0) {
+ tpm2_flush_context(chip, session_handle);
+ tpm_put_ops(chip);
+ }
+ }
+
kfree(digests);
+ kfree(keyinfo);
+ kfree(policydigest);
tpm2_pcr_reset(chip, 23);

out_dev:
@@ -617,12 +758,13 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
struct uswsusp_key_blob *blob)
{

- char *keytemplate = "load\t%s\tkeyhandle=0x81000000";
+ char *keytemplate = "load\t%s\tkeyhandle=0x81000000\tpolicyhandle=0x%x";
const struct cred *cred = current_cred();
struct trusted_key_payload *payload;
char certhash[SHA256_DIGEST_SIZE];
struct tpm_digest *digests = NULL;
char *blobstring = NULL;
+ int session_handle = -1;
struct key *key = NULL;
struct tpm_chip *chip;
char *keyinfo = NULL;
@@ -658,14 +800,21 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
if (ret != 0)
goto out;

- blobstring = kmalloc(blob->blob_len * 2, GFP_KERNEL);
+ ret = tpm_setup_policy(chip, &session_handle);
+ if (ret != 0)
+ goto out;
+
+ blobstring = kmalloc(blob->blob_len * 2 + 1, GFP_KERNEL);
if (!blobstring) {
ret = -ENOMEM;
goto out;
}

bin2hex(blobstring, blob->blob, blob->blob_len);
- keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring);
+ blobstring[blob->blob_len * 2] = '\0';
+ keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring,
+ session_handle);
+
if (!keyinfo) {
ret = -ENOMEM;
goto out;
@@ -748,6 +897,13 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
key_put(key);
}

+ if (session_handle != -1) {
+ if (tpm_try_get_ops(chip) == 0) {
+ tpm2_flush_context(chip, session_handle);
+ tpm_put_ops(chip);
+ }
+ }
+
kfree(keyinfo);
kfree(blobstring);
kfree(digests);
--
2.38.1.431.g37b22c650d-goog


2022-11-04 18:54:11

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] tpm: Allow PCR 23 to be restricted to kernel-only use

On Thu, Nov 03, 2022 at 11:01:11AM -0700, Evan Green wrote:
> From: Matthew Garrett <[email protected]>
>
> Introduce a new Kconfig, TCG_TPM_RESTRICT_PCR, which if enabled
> restricts usermode's ability to extend or reset PCR 23.
>
> Under certain circumstances it might be desirable to enable the creation
> of TPM-backed secrets that are only accessible to the kernel. In an
> ideal world this could be achieved by using TPM localities, but these
> don't appear to be available on consumer systems. An alternative is to
> simply block userland from modifying one of the resettable PCRs, leaving
> it available to the kernel. If the kernel ensures that no userland can
> access the TPM while it is carrying out work, it can reset PCR 23,
> extend it to an arbitrary value, create or load a secret, and then reset
> the PCR again. Even if userland somehow obtains the sealed material, it
> will be unable to unseal it since PCR 23 will never be in the
> appropriate state.
>
> This Kconfig is only properly supported for systems with TPM2 devices.
> For systems with TPM1 devices, having this Kconfig enabled completely
> restricts usermode's access to the TPM. TPM1 contains support for
> tunnelled transports, which usermode could use to smuggle commands
> through that this Kconfig is attempting to restrict.
>
> Link: https://lore.kernel.org/lkml/[email protected]/
> Signed-off-by: Matthew Garrett <[email protected]>
> Signed-off-by: Evan Green <[email protected]>
> ---
>
> Changes in v4:
> - Augment the commit message (Jarkko)
>
> Changes in v3:
> - Fix up commit message (Jarkko)
> - tpm2_find_and_validate_cc() was split (Jarkko)
> - Simply fully restrict TPM1 since v2 failed to account for tunnelled
> transport sessions (Stefan and Jarkko).
>
> Changes in v2:
> - Fixed sparse warnings

Since you've changed this patch from the original, I would follow the
same advice I gave here:
https://lore.kernel.org/lkml/202209201620.A886373@keescook/

>
--
Kees Cook

2022-11-04 18:54:28

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v4 05/11] security: keys: trusted: Allow storage of PCR values in creation data

On Thu, Nov 03, 2022 at 11:01:13AM -0700, Evan Green wrote:
> From: Matthew Garrett <[email protected]>
>
> When TPMs generate keys, they can also generate some information
> describing the state of the PCRs at creation time. This data can then
> later be certified by the TPM, allowing verification of the PCR values.
> This allows us to determine the state of the system at the time a key
> was generated. Add an additional argument to the trusted key creation
> options, allowing the user to provide the set of PCRs that should have
> their values incorporated into the creation data.
>
> Link: https://lore.kernel.org/lkml/[email protected]/
> Signed-off-by: Matthew Garrett <[email protected]>

Reviewed-by: Kees Cook <[email protected]>

--
Kees Cook

2022-11-04 19:09:18

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v4 07/11] PM: hibernate: Add kernel-based encryption

On Thu, Nov 03, 2022 at 11:01:15AM -0700, Evan Green wrote:
> [...]
> +config ENCRYPTED_HIBERNATION
> + bool "Encryption support for userspace snapshots"
> + depends on HIBERNATION_SNAPSHOT_DEV
> + depends on CRYPTO_AEAD2=y
> + default n

"default n" is the, err, default, so this line can be left out.

If someone more familiar with the crypto pieces can review the rest,
that would be good. :)

--
Kees Cook

2022-11-04 19:11:42

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v4 06/11] security: keys: trusted: Verify creation data

On Thu, Nov 03, 2022 at 11:01:14AM -0700, Evan Green wrote:
> If a loaded key contains creation data, ask the TPM to verify that
> creation data. This allows users like encrypted hibernate to know that
> the loaded and parsed creation data has not been tampered with.
>
> Suggested-by: Matthew Garrett <[email protected]>
> Signed-off-by: Evan Green <[email protected]>

Reviewed-by: Kees Cook <[email protected]>

--
Kees Cook

2022-11-04 19:13:08

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v4 08/11] PM: hibernate: Use TPM-backed keys to encrypt image

On Thu, Nov 03, 2022 at 11:01:16AM -0700, Evan Green wrote:
> When using encrypted hibernate images, have the TPM create a key for us
> and seal it. By handing back a sealed blob instead of the raw key, we
> prevent usermode from being able to decrypt and tamper with the
> hibernate image on a different machine.
>
> We'll also go through the motions of having PCR23 set to a known value at
> the time of key creation and unsealing. Currently there's nothing that
> enforces the contents of PCR23 as a condition to unseal the key blob,
> that will come in a later change.
>
> Sourced-from: Matthew Garrett <[email protected]>

I'd say Suggested-by. "Source-from:" is not a tag that has ever been
used before. :)

Otherwise, looks good.

Reviewed-by: Kees Cook <[email protected]>

--
Kees Cook

2022-11-04 19:16:33

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v4 09/11] PM: hibernate: Mix user key in encrypted hibernate

On Thu, Nov 03, 2022 at 11:01:17AM -0700, Evan Green wrote:
> Usermode may have their own data protection requirements when it comes
> to encrypting the hibernate image. For example, users may want a policy
> where the hibernate image is protected by a key derived both from
> platform-level security as well as authentication data (such as a
> password or PIN). This way, even if the platform is compromised (ie a
> stolen laptop), sensitive data cannot be exfiltrated via the hibernate
> image without additional data (like the user's password).
>
> The kernel is already doing the encryption, but will be protecting its
> key with the TPM alone. Allow usermode to mix in key content of their own
> for the data portion of the hibernate image, so that the image
> encryption key is determined both by a TPM-backed secret and
> user-defined data.
>
> To mix the user key in, we hash the kernel key followed by the user key,
> and use the resulting hash as the new key. This allows usermode to mix
> in its key material without giving it too much control over what key is
> actually driving the encryption (which might be used to attack the
> secret kernel key).
>
> Limiting this to the data portion allows the kernel to receive the page
> map and prepare its giant allocation even if this user key is not yet
> available (ie the user has not yet finished typing in their password).
> Once the user key becomes available, the data portion can be pushed
> through to the kernel as well. This enables "preloading" scenarios,
> where the hibernate image is loaded off of disk while the additional
> key material (eg password) is being collected.
>
> One annoyance of the "preloading" scheme is that hibernate image memory
> is effectively double-allocated: first by the usermode process pulling
> encrypted contents off of disk and holding it, and second by the kernel
> in its giant allocation in prepare_image(). An interesting future
> optimization would be to allow the kernel to accept and store encrypted
> page data before the user key is available. This would remove the
> double allocation problem, as usermode could push the encrypted pages
> loaded from disk immediately without storing them. The kernel could defer
> decryption of the data until the user key is available, while still
> knowing the correct page locations to store the encrypted data in.
>
> Signed-off-by: Evan Green <[email protected]>
> ---
>
> (no changes since v2)
>
> Changes in v2:
> - Add missing static on snapshot_encrypted_byte_count()
> - Fold in only the used kernel key bytes to the user key.
> - Make the user key length 32 (Eric)
> - Use CRYPTO_LIB_SHA256 for less boilerplate (Eric)
>
> include/uapi/linux/suspend_ioctls.h | 15 ++-
> kernel/power/Kconfig | 1 +
> kernel/power/power.h | 1 +
> kernel/power/snapenc.c | 158 ++++++++++++++++++++++++++--
> kernel/power/snapshot.c | 5 +
> kernel/power/user.c | 4 +
> kernel/power/user.h | 12 +++
> 7 files changed, 185 insertions(+), 11 deletions(-)
>
> diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h
> index b73026ef824bb9..f93a22eac52dc2 100644
> --- a/include/uapi/linux/suspend_ioctls.h
> +++ b/include/uapi/linux/suspend_ioctls.h
> @@ -25,6 +25,18 @@ struct uswsusp_key_blob {
> __u8 nonce[USWSUSP_KEY_NONCE_SIZE];
> } __attribute__((packed));
>
> +/*
> + * Allow user mode to fold in key material for the data portion of the hibernate
> + * image.
> + */
> +struct uswsusp_user_key {
> + /* Kernel returns the metadata size. */
> + __kernel_loff_t meta_size;
> + __u32 key_len;
> + __u8 key[32];

Why is this 32? (Is there a non-literal we can put here?)

> + __u32 pad;

And why the pad?

> +};
> +
> #define SNAPSHOT_IOC_MAGIC '3'
> #define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1)
> #define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2)
> @@ -42,6 +54,7 @@ struct uswsusp_key_blob {
> #define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t)
> #define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t)
> #define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob)
> -#define SNAPSHOT_IOC_MAXNR 21
> +#define SNAPSHOT_SET_USER_KEY _IOWR(SNAPSHOT_IOC_MAGIC, 22, struct uswsusp_user_key)
> +#define SNAPSHOT_IOC_MAXNR 22
>
> #endif /* _LINUX_SUSPEND_IOCTLS_H */
> diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
> index 2f8acbd87b34dc..35bf48b925ebf6 100644
> --- a/kernel/power/Kconfig
> +++ b/kernel/power/Kconfig
> @@ -97,6 +97,7 @@ config ENCRYPTED_HIBERNATION
> depends on HIBERNATION_SNAPSHOT_DEV
> depends on CRYPTO_AEAD2=y
> depends on TRUSTED_KEYS=y
> + select CRYPTO_LIB_SHA256
> default n
> help
> Enable support for kernel-based encryption of hibernation snapshots
> diff --git a/kernel/power/power.h b/kernel/power/power.h
> index b4f43394320961..5955e5cf692302 100644
> --- a/kernel/power/power.h
> +++ b/kernel/power/power.h
> @@ -151,6 +151,7 @@ struct snapshot_handle {
>
> extern unsigned int snapshot_additional_pages(struct zone *zone);
> extern unsigned long snapshot_get_image_size(void);
> +extern unsigned long snapshot_get_meta_page_count(void);
> extern int snapshot_read_next(struct snapshot_handle *handle);
> extern int snapshot_write_next(struct snapshot_handle *handle);
> extern void snapshot_write_finalize(struct snapshot_handle *handle);
> diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
> index 7ff4fc66f7500c..50167a37c5bf23 100644
> --- a/kernel/power/snapenc.c
> +++ b/kernel/power/snapenc.c
> @@ -6,6 +6,7 @@
> #include <crypto/gcm.h>
> #include <keys/trusted-type.h>
> #include <linux/key-type.h>
> +#include <crypto/sha.h>
> #include <linux/random.h>
> #include <linux/mm.h>
> #include <linux/tpm.h>
> @@ -21,6 +22,38 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256,
> 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05,
> 0x5f, 0x49}};
>
> +/* Derive a key from the kernel and user keys for data encryption. */
> +static int snapshot_use_user_key(struct snapshot_data *data)
> +{
> + u8 digest[SHA256_DIGEST_SIZE];
> + struct trusted_key_payload *payload = data->key->payload.data[0];
> + struct sha256_state sha256_state;
> +
> + /*
> + * Hash the kernel key and the user key together. This folds in the user
> + * key, but not in a way that gives the user mode predictable control
> + * over the key bits.
> + */
> + sha256_init(&sha256_state);
> + sha256_update(&sha256_state, payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE);
> + sha256_update(&sha256_state, data->user_key, sizeof(data->user_key));
> + sha256_final(&sha256_state, digest);
> + return crypto_aead_setkey(data->aead_tfm,
> + digest,
> + SNAPSHOT_ENCRYPTION_KEY_SIZE);
> +}
> +
> +/* Check to see if it's time to switch to the user key, and do it if so. */
> +static int snapshot_check_user_key_switch(struct snapshot_data *data)
> +{
> + if (data->user_key_valid && data->meta_size &&
> + data->crypt_total == data->meta_size) {
> + return snapshot_use_user_key(data);
> + }
> +
> + return 0;
> +}
> +
> /* Encrypt more data from the snapshot into the staging area. */
> static int snapshot_encrypt_refill(struct snapshot_data *data)
> {
> @@ -32,6 +65,15 @@ static int snapshot_encrypt_refill(struct snapshot_data *data)
> int pg_idx;
> int res;
>
> + if (data->crypt_total == 0) {
> + data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT;
> +
> + } else {
> + res = snapshot_check_user_key_switch(data);
> + if (res)
> + return res;
> + }
> +
> /*
> * The first buffer is the associated data, set to the offset to prevent
> * attacks that rearrange chunks.
> @@ -42,6 +84,11 @@ static int snapshot_encrypt_refill(struct snapshot_data *data)
> for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) {
> void *buf = data->crypt_pages[pg_idx];
>
> + /* Stop at the meta page boundary to potentially switch keys. */
> + if (total &&
> + ((data->crypt_total + total) == data->meta_size))
> + break;
> +
> res = snapshot_read_next(&data->handle);
> if (res < 0)
> return res;
> @@ -114,10 +161,10 @@ static int snapshot_decrypt_drain(struct snapshot_data *data)
> sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE);
>
> /*
> - * It's possible this is the final decrypt, and there are fewer than
> - * CHUNK_SIZE pages. If this is the case we would have just written the
> - * auth tag into the first few bytes of a new page. Copy to the tag if
> - * so.
> + * It's possible this is the final decrypt, or the final decrypt of the
> + * meta region, and there are fewer than CHUNK_SIZE pages. If this is
> + * the case we would have just written the auth tag into the first few
> + * bytes of a new page. Copy to the tag if so.
> */
> if ((page_count < CHUNK_SIZE) &&
> (data->crypt_offset - total) == sizeof(data->auth_tag)) {
> @@ -172,7 +219,14 @@ static int snapshot_decrypt_drain(struct snapshot_data *data)
> total += PAGE_SIZE;
> }
>
> + if (data->crypt_total == 0)
> + data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT;
> +
> data->crypt_total += total;
> + res = snapshot_check_user_key_switch(data);
> + if (res)
> + return res;
> +
> return 0;
> }
>
> @@ -221,8 +275,26 @@ static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data,
> if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) {
> size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
> size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
> + size_t size_avail = PAGE_SIZE;
> *buf = data->crypt_pages[pg_idx] + pg_off;
> - return PAGE_SIZE - pg_off;
> +
> + /*
> + * If this is the boundary where the meta pages end, then just
> + * return enough for the auth tag.
> + */
> + if (data->meta_size && (data->crypt_total < data->meta_size)) {
> + uint64_t total_done =
> + data->crypt_total + data->crypt_offset;
> +
> + if ((total_done >= data->meta_size) &&
> + (total_done <
> + (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE))) {
> +
> + size_avail = SNAPSHOT_AUTH_TAG_SIZE;
> + }
> + }
> +
> + return size_avail - pg_off;
> }
>
> /* Use offsets just beyond the size to return the tag. */
> @@ -304,9 +376,15 @@ ssize_t snapshot_write_encrypted(struct snapshot_data *data,
> break;
> }
>
> - /* Drain the encrypted buffer if it's full. */
> + /*
> + * Drain the encrypted buffer if it's full, or if we hit the end
> + * of the meta pages and need a key change.
> + */
> if ((data->crypt_offset >=
> - ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) {
> + ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE)) ||
> + (data->meta_size && (data->crypt_total < data->meta_size) &&
> + ((data->crypt_total + data->crypt_offset) ==
> + (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE)))) {
>
> int rc;
>
> @@ -350,6 +428,8 @@ void snapshot_teardown_encryption(struct snapshot_data *data)
> data->crypt_pages[i] = NULL;
> }
> }
> +
> + memset(data->user_key, 0, sizeof(data->user_key));
> }
>
> static int snapshot_setup_encryption_common(struct snapshot_data *data)
> @@ -359,6 +439,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data)
> data->crypt_total = 0;
> data->crypt_offset = 0;
> data->crypt_size = 0;
> + data->user_key_valid = false;
> memset(data->crypt_pages, 0, sizeof(data->crypt_pages));
> /* This only works once per hibernate. */
> if (data->aead_tfm)
> @@ -661,15 +742,72 @@ int snapshot_set_encryption_key(struct snapshot_data *data,
> return rc;
> }
>
> -loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
> +static loff_t snapshot_encrypted_byte_count(loff_t plain_size)
> {
> - loff_t pages = raw_size >> PAGE_SHIFT;
> + loff_t pages = plain_size >> PAGE_SHIFT;
> loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE;
> /*
> * The encrypted size is the normal size, plus a stitched in
> * authentication tag for every chunk of pages.
> */
> - return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
> + return plain_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
> +}
> +
> +static loff_t snapshot_get_meta_data_size(void)
> +{
> + loff_t pages = snapshot_get_meta_page_count();
> +
> + return snapshot_encrypted_byte_count(pages << PAGE_SHIFT);
> +}
> +
> +int snapshot_set_user_key(struct snapshot_data *data,
> + struct uswsusp_user_key __user *key)
> +{
> + struct uswsusp_user_key user_key;
> + unsigned int key_len;
> + int rc;
> + loff_t size;
> +
> + /*
> + * Return the metadata size, the number of bytes that can be fed in before
> + * the user data key is needed at resume time.
> + */
> + size = snapshot_get_meta_data_size();
> + rc = put_user(size, &key->meta_size);
> + if (rc)
> + return rc;
> +
> + rc = copy_from_user(&user_key, key, sizeof(struct uswsusp_user_key));
> + if (rc)
> + return rc;
> +
> + key_len = min_t(__u32, user_key.key_len, sizeof(data->user_key));
> + if (key_len < 8)
> + return -EINVAL;
> +
> + /* Don't allow it if it's too late. */
> + if (data->crypt_total > data->meta_size)
> + return -EBUSY;
> +
> + memset(data->user_key, 0, sizeof(data->user_key));
> + memcpy(data->user_key, user_key.key, key_len);

Is struct snapshot_data::user_key is supposed to be %NUL terminated? Or
is it just 0-padded up to 32 bytes? If the latter, it might be worth
marking struct snapshot_data::user_data with the __non_string attribute.

I don't like the dissociation of struct uswsusp_user_key::user_key and
struct snapshot_data::user_key, since a mistake here can lead to copying
kernel memory into struct snapshot_data::user_key. It would be nice to
see something like:

BUILD_BUG_ON(sizeof(data->user_key) < sizeof(user_key.key));

--
Kees Cook

2022-11-04 19:39:45

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v4 10/11] PM: hibernate: Verify the digest encryption key

On Thu, Nov 03, 2022 at 11:01:18AM -0700, Evan Green wrote:
> We want to ensure that the key used to encrypt the digest was created by
> the kernel during hibernation. To do this we request that the TPM
> include information about the value of PCR 23 at the time of key
> creation in the sealed blob. On resume, we can make sure that the PCR
> information in the creation data blob (already certified by the TPM to
> be accurate) corresponds to the expected value. Since only
> the kernel can touch PCR 23, if an attacker generates a key themselves
> the value of PCR 23 will have been different, allowing us to reject the
> key and boot normally instead of resuming.
>
> Co-developed-by: Matthew Garrett <[email protected]>
> Signed-off-by: Matthew Garrett <[email protected]>
> Signed-off-by: Evan Green <[email protected]>
>
> ---
> Matthew's original version of this patch is here:
> https://patchwork.kernel.org/project/linux-pm/patch/[email protected]/
>
> I moved the TPM2_CC_CERTIFYCREATION code into a separate change in the
> trusted key code because the blob_handle was being flushed and was no
> longer valid for use in CC_CERTIFYCREATION after the key was loaded. As
> an added benefit of moving the certification into the trusted keys code,
> we can drop the other patch from the original series that squirrelled
> the blob_handle away.
>
> Changes in v4:
> - Local variable reordering (Jarkko)
>
> Changes in v3:
> - Changed funky tag to Co-developed-by (Kees). Matthew, holler if you
> want something different.
>
> Changes in v2:
> - Fixed some sparse warnings
> - Use CRYPTO_LIB_SHA256 to get rid of sha256_data() (Eric)
> - Adjusted offsets due to new ASN.1 format, and added a creation data
> length check.
>
> kernel/power/snapenc.c | 67 ++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 65 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
> index 50167a37c5bf23..2f421061498246 100644
> --- a/kernel/power/snapenc.c
> +++ b/kernel/power/snapenc.c
> @@ -22,6 +22,12 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256,
> 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05,
> 0x5f, 0x49}};
>
> +/* sha256(sha256(empty_pcr | known_digest)) */
> +static const char expected_digest[] = {0x2f, 0x96, 0xf2, 0x1b, 0x70, 0xa9, 0xe8,
> + 0x42, 0x25, 0x8e, 0x66, 0x07, 0xbe, 0xbc, 0xe3, 0x1f, 0x2c, 0x84, 0x4a,
> + 0x3f, 0x85, 0x17, 0x31, 0x47, 0x9a, 0xa5, 0x53, 0xbb, 0x23, 0x0c, 0x32,
> + 0xf3};
> +
> /* Derive a key from the kernel and user keys for data encryption. */
> static int snapshot_use_user_key(struct snapshot_data *data)
> {
> @@ -486,7 +492,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data)
> static int snapshot_create_kernel_key(struct snapshot_data *data)
> {
> /* Create a key sealed by the SRK. */
> - char *keyinfo = "new\t32\tkeyhandle=0x81000000";
> + char *keyinfo = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000";
> const struct cred *cred = current_cred();
> struct tpm_digest *digests = NULL;
> struct key *key = NULL;
> @@ -613,6 +619,8 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
>
> char *keytemplate = "load\t%s\tkeyhandle=0x81000000";
> const struct cred *cred = current_cred();
> + struct trusted_key_payload *payload;
> + char certhash[SHA256_DIGEST_SIZE];
> struct tpm_digest *digests = NULL;
> char *blobstring = NULL;
> struct key *key = NULL;
> @@ -635,8 +643,10 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
>
> digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest),
> GFP_KERNEL);
> - if (!digests)
> + if (!digests) {
> + ret = -ENOMEM;
> goto out;
> + }
>
> for (i = 0; i < chip->nr_allocated_banks; i++) {
> digests[i].alg_id = chip->allocated_banks[i].alg_id;
> @@ -676,6 +686,59 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
> if (ret != 0)
> goto out;
>
> + /* Verify the creation hash matches the creation data. */
> + payload = key->payload.data[0];
> + if (!payload->creation || !payload->creation_hash ||
> + (payload->creation_len < 3) ||

Later accesses are reaching into indexes, 6, 8, 12, 14, etc. Shouldn't
this test be:

(payload->creation_len < 14 + SHA256_DIGEST_SIZE) ||


> + (payload->creation_hash_len < SHA256_DIGEST_SIZE)) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + sha256(payload->creation + 2, payload->creation_len - 2, certhash);

Why +2 offset?

> + if (memcmp(payload->creation_hash + 2, certhash, SHA256_DIGEST_SIZE) != 0) {

And if this is +2 also, shouldn't the earlier test be:

(payload->creation_hash_len - 2 != SHA256_DIGEST_SIZE)) {

?

> + if (be32_to_cpu(*(__be32 *)&payload->creation[2]) != 1) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + if (be16_to_cpu(*(__be16 *)&payload->creation[6]) != TPM_ALG_SHA256) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + if (*(char *)&payload->creation[8] != 3) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + /* PCR 23 selected */
> + if (be32_to_cpu(*(__be32 *)&payload->creation[8]) != 0x03000080) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + if (be16_to_cpu(*(__be16 *)&payload->creation[12]) !=
> + SHA256_DIGEST_SIZE) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + /* Verify PCR 23 contained the expected value when the key was created. */
> + if (memcmp(&payload->creation[14], expected_digest,
> + SHA256_DIGEST_SIZE) != 0) {

These various literals (2, 6, 8, 3, 8, 0x03000080, 12, 14) should be
explicit #defines so their purpose/meaning is more clear.

I can guess at it, but better to avoid the guessing. :)

> +
> + ret = -EINVAL;
> + goto out;
> + }
> +
> data->key = key;
> key = NULL;
>
> --
> 2.38.1.431.g37b22c650d-goog
>

--
Kees Cook

2022-11-07 12:02:48

by Jarkko Sakkinen

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] tpm: Allow PCR 23 to be restricted to kernel-only use

On Thu, Nov 03, 2022 at 11:01:11AM -0700, Evan Green wrote:
> From: Matthew Garrett <[email protected]>
>
> Introduce a new Kconfig, TCG_TPM_RESTRICT_PCR, which if enabled
> restricts usermode's ability to extend or reset PCR 23.
>
> Under certain circumstances it might be desirable to enable the creation
> of TPM-backed secrets that are only accessible to the kernel. In an
> ideal world this could be achieved by using TPM localities, but these
> don't appear to be available on consumer systems. An alternative is to
> simply block userland from modifying one of the resettable PCRs, leaving
> it available to the kernel. If the kernel ensures that no userland can
> access the TPM while it is carrying out work, it can reset PCR 23,
> extend it to an arbitrary value, create or load a secret, and then reset
> the PCR again. Even if userland somehow obtains the sealed material, it
> will be unable to unseal it since PCR 23 will never be in the
> appropriate state.
>
> This Kconfig is only properly supported for systems with TPM2 devices.
> For systems with TPM1 devices, having this Kconfig enabled completely
> restricts usermode's access to the TPM. TPM1 contains support for
> tunnelled transports, which usermode could use to smuggle commands
> through that this Kconfig is attempting to restrict.
>
> Link: https://lore.kernel.org/lkml/[email protected]/
> Signed-off-by: Matthew Garrett <[email protected]>
> Signed-off-by: Evan Green <[email protected]>
> ---
>
> Changes in v4:
> - Augment the commit message (Jarkko)
>
> Changes in v3:
> - Fix up commit message (Jarkko)
> - tpm2_find_and_validate_cc() was split (Jarkko)
> - Simply fully restrict TPM1 since v2 failed to account for tunnelled
> transport sessions (Stefan and Jarkko).
>
> Changes in v2:
> - Fixed sparse warnings
>
> drivers/char/tpm/Kconfig | 12 ++++++++++++
> drivers/char/tpm/tpm-dev-common.c | 8 ++++++++
> drivers/char/tpm/tpm.h | 19 +++++++++++++++++++
> drivers/char/tpm/tpm1-cmd.c | 13 +++++++++++++
> drivers/char/tpm/tpm2-cmd.c | 22 ++++++++++++++++++++++
> 5 files changed, 74 insertions(+)
>
> diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> index 927088b2c3d3f2..c8ed54c66e399a 100644
> --- a/drivers/char/tpm/Kconfig
> +++ b/drivers/char/tpm/Kconfig
> @@ -211,4 +211,16 @@ config TCG_FTPM_TEE
> This driver proxies for firmware TPM running in TEE.
>
> source "drivers/char/tpm/st33zp24/Kconfig"
> +
> +config TCG_TPM_RESTRICT_PCR
> + bool "Restrict userland access to PCR 23"
> + depends on TCG_TPM
> + help
> + If set, block userland from extending or resetting PCR 23. This allows it
> + to be restricted to in-kernel use, preventing userland from being able to
> + make use of data sealed to the TPM by the kernel. This is required for
> + secure hibernation support, but should be left disabled if any userland
> + may require access to PCR23. This is a TPM2-only feature, and if enabled
> + on a TPM1 machine will cause all usermode TPM commands to return EPERM due
> + to the complications introduced by tunnelled sessions in TPM1.2.
> endif # TCG_TPM
> diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
> index dc4c0a0a512903..7a4e618c7d1942 100644
> --- a/drivers/char/tpm/tpm-dev-common.c
> +++ b/drivers/char/tpm/tpm-dev-common.c
> @@ -198,6 +198,14 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
> priv->response_read = false;
> *off = 0;
>
> + if (priv->chip->flags & TPM_CHIP_FLAG_TPM2)
> + ret = tpm2_cmd_restricted(priv->chip, priv->data_buffer, size);
> + else
> + ret = tpm1_cmd_restricted(priv->chip, priv->data_buffer, size);
> +
> + if (ret)
> + goto out;
> +
> /*
> * If in nonblocking mode schedule an async job to send
> * the command return the size.
> diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> index f1e0f490176f01..c0845e3f9eda17 100644
> --- a/drivers/char/tpm/tpm.h
> +++ b/drivers/char/tpm/tpm.h
> @@ -245,4 +245,23 @@ void tpm_bios_log_setup(struct tpm_chip *chip);
> void tpm_bios_log_teardown(struct tpm_chip *chip);
> int tpm_dev_common_init(void);
> void tpm_dev_common_exit(void);
> +
> +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> +#define TPM_RESTRICTED_PCR 23
> +
> +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> +#else
> +static inline int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> + size_t size)
> +{
> + return 0;
> +}
> +
> +static inline int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> + size_t size)
> +{
> + return 0;
> +}
> +#endif
> #endif
> diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
> index cf64c738510529..1869e89215fcb9 100644
> --- a/drivers/char/tpm/tpm1-cmd.c
> +++ b/drivers/char/tpm/tpm1-cmd.c
> @@ -811,3 +811,16 @@ int tpm1_get_pcr_allocation(struct tpm_chip *chip)
>
> return 0;
> }
> +
> +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> +{
> + /*
> + * Restrict all usermode commands on TPM1.2. Ideally we'd just restrict
> + * TPM_ORD_PCR_EXTEND and TPM_ORD_PCR_RESET, but TPM1.2 also supports
> + * tunnelled transport sessions where the kernel would be unable to filter
> + * commands.
> + */
> + return -EPERM;
> +}
> +#endif
> diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
> index 303ce2ea02a4b0..e0503cfd7bcfee 100644
> --- a/drivers/char/tpm/tpm2-cmd.c
> +++ b/drivers/char/tpm/tpm2-cmd.c
> @@ -778,3 +778,25 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc)
>
> return -1;
> }
> +
> +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> +{
> + int cc = tpm2_find_and_validate_cc(chip, NULL, buffer, size);
> + __be32 *handle;
> +
> + switch (cc) {
> + case TPM2_CC_PCR_EXTEND:
> + case TPM2_CC_PCR_RESET:
> + if (size < (TPM_HEADER_SIZE + sizeof(u32)))
> + return -EINVAL;
> +
> + handle = (__be32 *)&buffer[TPM_HEADER_SIZE];
> + if (be32_to_cpu(*handle) == TPM_RESTRICTED_PCR)
> + return -EPERM;
> + break;
> + }
> +
> + return 0;
> +}
> +#endif
> --
> 2.38.1.431.g37b22c650d-goog
>

This looks otherwise good but I have still one remark: what is the reason
for restricting PCR23 for TPM 1.x?

BR, Jarkko


2022-11-07 12:58:09

by Jarkko Sakkinen

[permalink] [raw]
Subject: Re: [PATCH v4 07/11] PM: hibernate: Add kernel-based encryption

On Thu, Nov 03, 2022 at 11:01:15AM -0700, Evan Green wrote:
> Enabling the kernel to be able to do encryption and integrity checks on
> the hibernate image prevents a malicious userspace from escalating to
> kernel execution via hibernation resume. As a first step toward this, add
> the scaffolding needed for the kernel to do AEAD encryption on the
> hibernate image, giving us both secrecy and integrity.
>
> We currently hardwire the encryption to be gcm(aes) in 16-page chunks.
> This strikes a balance between minimizing the authentication tag
> overhead on storage, and keeping a modest sized staging buffer. With
> this chunk size, we'd generate 2MB of authentication tag data on an 8GB
> hiberation image.
>
> The encryption currently sits on top of the core snapshot functionality,
> wired up only if requested in the uswsusp path. This could potentially

User Space Software Suspend?

I'd also open up briefly a bit what is uswsup path that gets wired up.

> be lowered into the common snapshot code given a mechanism to stitch the
> key contents into the image itself.
>
> To avoid forcing usermode to deal with sequencing the auth tags in with
> the data, we stitch the auth tags in to the snapshot after each chunk of
> pages. This complicates the read and write functions, as we roll through
> the flow of (for read) 1) fill the staging buffer with encrypted data,
> 2) feed the data pages out to user mode, 3) feed the tag out to user
> mode. To avoid having each syscall return a small and variable amount
> of data, the encrypted versions of read and write operate in a loop,
> allowing an arbitrary amount of data through per syscall.
>
> One alternative that would simplify things here would be a streaming
> interface to AEAD. Then we could just stream the entire hibernate image
> through directly, and handle a single tag at the end. However there is a
> school of thought that suggests a streaming interface to AEAD represents
> a loaded footgun, as it tempts the caller to act on the decrypted but
> not yet verified data, defeating the purpose of AEAD.
>
> With this change alone, we don't actually protect ourselves from
> malicious userspace at all, since we kindly hand the key in plaintext
> to usermode. In later changes, we'll seal the key with the TPM
> before handing it back to usermode, so they can't decrypt or tamper with
> the key themselves.
>
> Signed-off-by: Evan Green <[email protected]>
> ---
>
> Changes in v4:
> - Local ordering and whitespace changes (Jarkko)
>
> Documentation/power/userland-swsusp.rst | 8 +
> include/uapi/linux/suspend_ioctls.h | 15 +-
> kernel/power/Kconfig | 13 +
> kernel/power/Makefile | 1 +
> kernel/power/snapenc.c | 493 ++++++++++++++++++++++++
> kernel/power/user.c | 40 +-
> kernel/power/user.h | 103 +++++
> 7 files changed, 661 insertions(+), 12 deletions(-)
> create mode 100644 kernel/power/snapenc.c
> create mode 100644 kernel/power/user.h
>
> diff --git a/Documentation/power/userland-swsusp.rst b/Documentation/power/userland-swsusp.rst
> index 1cf62d80a9ca10..f759915a78ce98 100644
> --- a/Documentation/power/userland-swsusp.rst
> +++ b/Documentation/power/userland-swsusp.rst
> @@ -115,6 +115,14 @@ SNAPSHOT_S2RAM
> to resume the system from RAM if there's enough battery power or restore
> its state on the basis of the saved suspend image otherwise)
>
> +SNAPSHOT_ENABLE_ENCRYPTION
> + Enables encryption of the hibernate image within the kernel. Upon suspend
> + (ie when the snapshot device was opened for reading), returns a blob
> + representing the random encryption key the kernel created to encrypt the
> + hibernate image with. Upon resume (ie when the snapshot device was opened
> + for writing), receives a blob from usermode containing the key material
> + previously returned during hibernate.
> +
> The device's read() operation can be used to transfer the snapshot image from
> the kernel. It has the following limitations:
>
> diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h
> index bcce04e21c0dce..b73026ef824bb9 100644
> --- a/include/uapi/linux/suspend_ioctls.h
> +++ b/include/uapi/linux/suspend_ioctls.h
> @@ -13,6 +13,18 @@ struct resume_swap_area {
> __u32 dev;
> } __attribute__((packed));
>
> +#define USWSUSP_KEY_NONCE_SIZE 16
> +
> +/*
> + * This structure is used to pass the kernel's hibernate encryption key in
> + * either direction.
> + */
> +struct uswsusp_key_blob {
> + __u32 blob_len;
> + __u8 blob[512];
> + __u8 nonce[USWSUSP_KEY_NONCE_SIZE];
> +} __attribute__((packed));
> +
> #define SNAPSHOT_IOC_MAGIC '3'
> #define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1)
> #define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2)
> @@ -29,6 +41,7 @@ struct resume_swap_area {
> #define SNAPSHOT_PREF_IMAGE_SIZE _IO(SNAPSHOT_IOC_MAGIC, 18)
> #define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t)
> #define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t)
> -#define SNAPSHOT_IOC_MAXNR 20
> +#define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob)
> +#define SNAPSHOT_IOC_MAXNR 21
>
> #endif /* _LINUX_SUSPEND_IOCTLS_H */
> diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
> index 60a1d3051cc79a..cd574af0b43379 100644
> --- a/kernel/power/Kconfig
> +++ b/kernel/power/Kconfig
> @@ -92,6 +92,19 @@ config HIBERNATION_SNAPSHOT_DEV
>
> If in doubt, say Y.
>
> +config ENCRYPTED_HIBERNATION
> + bool "Encryption support for userspace snapshots"
> + depends on HIBERNATION_SNAPSHOT_DEV
> + depends on CRYPTO_AEAD2=y
> + default n
> + help
> + Enable support for kernel-based encryption of hibernation snapshots
> + created by uswsusp tools.
> +
> + Say N if userspace handles the image encryption.
> +
> + If in doubt, say N.
> +
> config PM_STD_PARTITION
> string "Default resume partition"
> depends on HIBERNATION
> diff --git a/kernel/power/Makefile b/kernel/power/Makefile
> index 874ad834dc8daf..7be08f2e0e3b68 100644
> --- a/kernel/power/Makefile
> +++ b/kernel/power/Makefile
> @@ -16,6 +16,7 @@ obj-$(CONFIG_SUSPEND) += suspend.o
> obj-$(CONFIG_PM_TEST_SUSPEND) += suspend_test.o
> obj-$(CONFIG_HIBERNATION) += hibernate.o snapshot.o swap.o
> obj-$(CONFIG_HIBERNATION_SNAPSHOT_DEV) += user.o
> +obj-$(CONFIG_ENCRYPTED_HIBERNATION) += snapenc.o
> obj-$(CONFIG_PM_AUTOSLEEP) += autosleep.o
> obj-$(CONFIG_PM_WAKELOCKS) += wakelock.o
>
> diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
> new file mode 100644
> index 00000000000000..f215df16dad4d3
> --- /dev/null
> +++ b/kernel/power/snapenc.c
> @@ -0,0 +1,493 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* This file provides encryption support for system snapshots. */
> +
> +#include <linux/crypto.h>
> +#include <crypto/aead.h>
> +#include <crypto/gcm.h>
> +#include <linux/random.h>
> +#include <linux/mm.h>
> +#include <linux/uaccess.h>
> +
> +#include "power.h"
> +#include "user.h"
> +
> +/* Encrypt more data from the snapshot into the staging area. */
> +static int snapshot_encrypt_refill(struct snapshot_data *data)
> +{
> +
> + struct aead_request *req = data->aead_req;
> + u8 nonce[GCM_AES_IV_SIZE];
> + DECLARE_CRYPTO_WAIT(wait);
> + size_t total = 0;
> + int pg_idx;
> + int res;
> +
> + /*
> + * The first buffer is the associated data, set to the offset to prevent
> + * attacks that rearrange chunks.
> + */
> + sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total));
> +
> + /* Load the crypt buffer with snapshot pages. */
> + for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) {
> + void *buf = data->crypt_pages[pg_idx];
> +
> + res = snapshot_read_next(&data->handle);
> + if (res < 0)
> + return res;
> + if (res == 0)
> + break;
> +
> + WARN_ON(res != PAGE_SIZE);
> +
> + /*
> + * Copy the page into the staging area. A future optimization
> + * could potentially skip this copy for lowmem pages.
> + */
> + memcpy(buf, data_of(data->handle), PAGE_SIZE);
> + sg_set_buf(&data->sg[1 + pg_idx], buf, PAGE_SIZE);
> + total += PAGE_SIZE;
> + }
> +
> + sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE);
> + aead_request_set_callback(req, 0, crypto_req_done, &wait);
> + /*
> + * Use incrementing nonces for each chunk, since a 64 bit value won't
> + * roll into re-use for any given hibernate image.
> + */
> + memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low));
> + memcpy(&nonce[sizeof(data->nonce_low)],
> + &data->nonce_high,
> + sizeof(nonce) - sizeof(data->nonce_low));
> +
> + data->nonce_low += 1;
> + /* Total does not include AAD or the auth tag. */
> + aead_request_set_crypt(req, data->sg, data->sg, total, nonce);
> + res = crypto_wait_req(crypto_aead_encrypt(req), &wait);
> + if (res)
> + return res;
> +
> + data->crypt_size = total;
> + data->crypt_total += total;
> + return 0;
> +}
> +
> +/* Decrypt data from the staging area and push it to the snapshot. */
> +static int snapshot_decrypt_drain(struct snapshot_data *data)
> +{
> + struct aead_request *req = data->aead_req;
> + u8 nonce[GCM_AES_IV_SIZE];
> + DECLARE_CRYPTO_WAIT(wait);
> + int page_count;
> + size_t total;
> + int pg_idx;
> + int res;
> +
> + /* Set up the associated data. */
> + sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total));
> +
> + /*
> + * Get the number of full pages, which could be short at the end. There
> + * should also be a tag at the end, so the offset won't be an even page.
> + */
> + page_count = data->crypt_offset >> PAGE_SHIFT;
> + total = page_count << PAGE_SHIFT;
> + if ((total == 0) || (total == data->crypt_offset))
> + return -EINVAL;
> +
> + /*
> + * Load the sg list with the crypt buffer. Inline decrypt back into the
> + * staging buffer. A future optimization could decrypt directly into
> + * lowmem pages.
> + */
> + for (pg_idx = 0; pg_idx < page_count; pg_idx++)
> + sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE);
> +
> + /*
> + * It's possible this is the final decrypt, and there are fewer than
> + * CHUNK_SIZE pages. If this is the case we would have just written the
> + * auth tag into the first few bytes of a new page. Copy to the tag if
> + * so.
> + */
> + if ((page_count < CHUNK_SIZE) &&
> + (data->crypt_offset - total) == sizeof(data->auth_tag)) {
> +
> + memcpy(data->auth_tag,
> + data->crypt_pages[pg_idx],
> + sizeof(data->auth_tag));
> +
> + } else if (data->crypt_offset !=
> + ((CHUNK_SIZE << PAGE_SHIFT) + SNAPSHOT_AUTH_TAG_SIZE)) {
> +
> + return -EINVAL;
> + }
> +
> + sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE);
> + aead_request_set_callback(req, 0, crypto_req_done, &wait);
> + memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low));
> + memcpy(&nonce[sizeof(data->nonce_low)],
> + &data->nonce_high,
> + sizeof(nonce) - sizeof(data->nonce_low));
> +
> + data->nonce_low += 1;
> + aead_request_set_crypt(req, data->sg, data->sg, total + SNAPSHOT_AUTH_TAG_SIZE, nonce);
> + res = crypto_wait_req(crypto_aead_decrypt(req), &wait);
> + if (res)
> + return res;
> +
> + data->crypt_size = 0;
> + data->crypt_offset = 0;
> +
> + /* Push the decrypted pages further down the stack. */
> + total = 0;
> + for (pg_idx = 0; pg_idx < page_count; pg_idx++) {
> + void *buf = data->crypt_pages[pg_idx];
> +
> + res = snapshot_write_next(&data->handle);
> + if (res < 0)
> + return res;
> + if (res == 0)
> + break;
> +
> + if (!data_of(data->handle))
> + return -EINVAL;
> +
> + WARN_ON(res != PAGE_SIZE);
> +
> + /*
> + * Copy the page into the staging area. A future optimization
> + * could potentially skip this copy for lowmem pages.
> + */
> + memcpy(data_of(data->handle), buf, PAGE_SIZE);
> + total += PAGE_SIZE;
> + }
> +
> + data->crypt_total += total;
> + return 0;
> +}
> +
> +static ssize_t snapshot_read_next_encrypted(struct snapshot_data *data,
> + void **buf)
> +{
> + size_t tag_off;
> +
> + /* Refill the encrypted buffer if it's empty. */
> + if ((data->crypt_size == 0) ||
> + (data->crypt_offset >=
> + (data->crypt_size + SNAPSHOT_AUTH_TAG_SIZE))) {
> +
> + int rc;
> +
> + data->crypt_size = 0;
> + data->crypt_offset = 0;
> + rc = snapshot_encrypt_refill(data);
> + if (rc < 0)
> + return rc;
> + }
> +
> + /* Return data pages if the offset is in that region. */
> + if (data->crypt_offset < data->crypt_size) {
> + size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
> + size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
> + *buf = data->crypt_pages[pg_idx] + pg_off;
> + return PAGE_SIZE - pg_off;
> + }
> +
> + /* Use offsets just beyond the size to return the tag. */
> + tag_off = data->crypt_offset - data->crypt_size;
> + if (tag_off > SNAPSHOT_AUTH_TAG_SIZE)
> + tag_off = SNAPSHOT_AUTH_TAG_SIZE;
> +
> + *buf = data->auth_tag + tag_off;
> + return SNAPSHOT_AUTH_TAG_SIZE - tag_off;
> +}
> +
> +static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data,
> + void **buf)
> +{
> + size_t tag_off;
> +
> + /* Return data pages if the offset is in that region. */
> + if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) {
> + size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
> + size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
> + *buf = data->crypt_pages[pg_idx] + pg_off;
> + return PAGE_SIZE - pg_off;
> + }
> +
> + /* Use offsets just beyond the size to return the tag. */
> + tag_off = data->crypt_offset - (PAGE_SIZE * CHUNK_SIZE);
> + if (tag_off > SNAPSHOT_AUTH_TAG_SIZE)
> + tag_off = SNAPSHOT_AUTH_TAG_SIZE;
> +
> + *buf = data->auth_tag + tag_off;
> + return SNAPSHOT_AUTH_TAG_SIZE - tag_off;
> +}
> +
> +ssize_t snapshot_read_encrypted(struct snapshot_data *data,
> + char __user *buf, size_t count, loff_t *offp)
> +{
> + ssize_t total = 0;
> +
> + /* Loop getting buffers of varying sizes and copying to userspace. */
> + while (count) {
> + size_t copy_size;
> + size_t not_done;
> + void *src;
> + ssize_t src_size = snapshot_read_next_encrypted(data, &src);
> +
> + if (src_size <= 0) {
> + if (total == 0)
> + return src_size;
> +
> + break;
> + }
> +
> + copy_size = min(count, (size_t)src_size);
> + not_done = copy_to_user(buf + total, src, copy_size);
> + copy_size -= not_done;
> + total += copy_size;
> + count -= copy_size;
> + data->crypt_offset += copy_size;
> + if (copy_size == 0) {
> + if (total == 0)
> + return -EFAULT;
> +
> + break;
> + }
> + }
> +
> + *offp += total;
> + return total;
> +}
> +
> +ssize_t snapshot_write_encrypted(struct snapshot_data *data,
> + const char __user *buf, size_t count,
> + loff_t *offp)
> +{
> + ssize_t total = 0;
> +
> + /* Loop getting buffers of varying sizes and copying from. */
> + while (count) {
> + size_t copy_size;
> + size_t not_done;
> + void *dst;
> + ssize_t dst_size = snapshot_write_next_encrypted(data, &dst);
> +
> + if (dst_size <= 0) {
> + if (total == 0)
> + return dst_size;
> +
> + break;
> + }
> +
> + copy_size = min(count, (size_t)dst_size);
> + not_done = copy_from_user(dst, buf + total, copy_size);
> + copy_size -= not_done;
> + total += copy_size;
> + count -= copy_size;
> + data->crypt_offset += copy_size;
> + if (copy_size == 0) {
> + if (total == 0)
> + return -EFAULT;
> +
> + break;
> + }
> +
> + /* Drain the encrypted buffer if it's full. */
> + if ((data->crypt_offset >=
> + ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) {
> +
> + int rc;
> +
> + rc = snapshot_decrypt_drain(data);
> + if (rc < 0)
> + return rc;
> + }
> + }
> +
> + *offp += total;
> + return total;
> +}
> +
> +void snapshot_teardown_encryption(struct snapshot_data *data)
> +{
> + int i;
> +
> + if (data->aead_req) {
> + aead_request_free(data->aead_req);
> + data->aead_req = NULL;
> + }
> +
> + if (data->aead_tfm) {
> + crypto_free_aead(data->aead_tfm);
> + data->aead_tfm = NULL;
> + }
> +
> + for (i = 0; i < CHUNK_SIZE; i++) {
> + if (data->crypt_pages[i]) {
> + free_page((unsigned long)data->crypt_pages[i]);
> + data->crypt_pages[i] = NULL;
> + }
> + }
> +}
> +
> +static int snapshot_setup_encryption_common(struct snapshot_data *data)
> +{
> + int i, rc;
> +
> + data->crypt_total = 0;
> + data->crypt_offset = 0;
> + data->crypt_size = 0;
> + memset(data->crypt_pages, 0, sizeof(data->crypt_pages));
> + /* This only works once per hibernate. */
> + if (data->aead_tfm)
> + return -EINVAL;
> +
> + /* Set up the encryption transform */
> + data->aead_tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
> + if (IS_ERR(data->aead_tfm)) {
> + rc = PTR_ERR(data->aead_tfm);
> + data->aead_tfm = NULL;
> + return rc;
> + }
> +
> + rc = -ENOMEM;
> + data->aead_req = aead_request_alloc(data->aead_tfm, GFP_KERNEL);
> + if (data->aead_req == NULL)
> + goto setup_fail;
> +
> + /* Allocate the staging area */
> + for (i = 0; i < CHUNK_SIZE; i++) {
> + data->crypt_pages[i] = (void *)__get_free_page(GFP_ATOMIC);
> + if (data->crypt_pages[i] == NULL)
> + goto setup_fail;
> + }
> +
> + sg_init_table(data->sg, CHUNK_SIZE + 2);
> +
> + /*
> + * The associated data will be the offset so that blocks can't be
> + * rearranged.
> + */
> + aead_request_set_ad(data->aead_req, sizeof(data->crypt_total));
> + rc = crypto_aead_setauthsize(data->aead_tfm, SNAPSHOT_AUTH_TAG_SIZE);
> + if (rc)
> + goto setup_fail;
> +
> + return 0;
> +
> +setup_fail:
> + snapshot_teardown_encryption(data);
> + return rc;
> +}
> +
> +int snapshot_get_encryption_key(struct snapshot_data *data,
> + struct uswsusp_key_blob __user *key)
> +{
> + u8 aead_key[SNAPSHOT_ENCRYPTION_KEY_SIZE];
> + u8 nonce[USWSUSP_KEY_NONCE_SIZE];
> + int rc;
> +
> + /* Don't pull a random key from a world that can be reset. */
> + if (data->ready)
> + return -EPIPE;
> +
> + rc = snapshot_setup_encryption_common(data);
> + if (rc)
> + return rc;
> +
> + /* Build a random starting nonce. */
> + get_random_bytes(nonce, sizeof(nonce));
> + memcpy(&data->nonce_low, &nonce[0], sizeof(data->nonce_low));
> + memcpy(&data->nonce_high, &nonce[8], sizeof(data->nonce_high));
> + /* Build a random key */
> + get_random_bytes(aead_key, sizeof(aead_key));
> + rc = crypto_aead_setkey(data->aead_tfm, aead_key, sizeof(aead_key));
> + if (rc)
> + goto fail;
> +
> + /* Hand the key back to user mode (to be changed!) */
> + rc = put_user(sizeof(struct uswsusp_key_blob), &key->blob_len);
> + if (rc)
> + goto fail;
> +
> + rc = copy_to_user(&key->blob, &aead_key, sizeof(aead_key));
> + if (rc)
> + goto fail;
> +
> + rc = copy_to_user(&key->nonce, &nonce, sizeof(nonce));
> + if (rc)
> + goto fail;
> +
> + return 0;
> +
> +fail:
> + snapshot_teardown_encryption(data);
> + return rc;
> +}
> +
> +int snapshot_set_encryption_key(struct snapshot_data *data,
> + struct uswsusp_key_blob __user *key)
> +{
> + struct uswsusp_key_blob blob;
> + int rc;
> +
> + /* It's too late if data's been pushed in. */
> + if (data->handle.cur)
> + return -EPIPE;
> +
> + rc = snapshot_setup_encryption_common(data);
> + if (rc)
> + return rc;
> +
> + /* Load the key from user mode. */
> + rc = copy_from_user(&blob, key, sizeof(struct uswsusp_key_blob));
> + if (rc)
> + goto crypto_setup_fail;
> +
> + if (blob.blob_len != sizeof(struct uswsusp_key_blob)) {
> + rc = -EINVAL;
> + goto crypto_setup_fail;
> + }
> +
> + rc = crypto_aead_setkey(data->aead_tfm,
> + blob.blob,
> + SNAPSHOT_ENCRYPTION_KEY_SIZE);
> +
> + if (rc)
> + goto crypto_setup_fail;
> +
> + /* Load the starting nonce. */
> + memcpy(&data->nonce_low, &blob.nonce[0], sizeof(data->nonce_low));
> + memcpy(&data->nonce_high, &blob.nonce[8], sizeof(data->nonce_high));
> + return 0;
> +
> +crypto_setup_fail:
> + snapshot_teardown_encryption(data);
> + return rc;
> +}
> +
> +loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
> +{
> + loff_t pages = raw_size >> PAGE_SHIFT;
> + loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE;
> + /*
> + * The encrypted size is the normal size, plus a stitched in
> + * authentication tag for every chunk of pages.
> + */
> + return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
> +}
> +
> +int snapshot_finalize_decrypted_image(struct snapshot_data *data)
> +{
> + int rc;
> +
> + if (data->crypt_offset != 0) {
> + rc = snapshot_decrypt_drain(data);
> + if (rc)
> + return rc;
> + }
> +
> + return 0;
> +}
> diff --git a/kernel/power/user.c b/kernel/power/user.c
> index 3a4e70366f354c..bba5cdbd2c0239 100644
> --- a/kernel/power/user.c
> +++ b/kernel/power/user.c
> @@ -25,19 +25,10 @@
> #include <linux/uaccess.h>
>
> #include "power.h"
> +#include "user.h"
>
> static bool need_wait;
> -
> -static struct snapshot_data {
> - struct snapshot_handle handle;
> - int swap;
> - int mode;
> - bool frozen;
> - bool ready;
> - bool platform_support;
> - bool free_bitmaps;
> - dev_t dev;
> -} snapshot_state;
> +struct snapshot_data snapshot_state;
>
> int is_hibernate_resume_dev(dev_t dev)
> {
> @@ -122,6 +113,7 @@ static int snapshot_release(struct inode *inode, struct file *filp)
> } else if (data->free_bitmaps) {
> free_basic_memory_bitmaps();
> }
> + snapshot_teardown_encryption(data);
> pm_notifier_call_chain(data->mode == O_RDONLY ?
> PM_POST_HIBERNATION : PM_POST_RESTORE);
> hibernate_release();
> @@ -146,6 +138,12 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
> res = -ENODATA;
> goto Unlock;
> }
> +
> + if (snapshot_encryption_enabled(data)) {
> + res = snapshot_read_encrypted(data, buf, count, offp);
> + goto Unlock;
> + }
> +
> if (!pg_offp) { /* on page boundary? */
> res = snapshot_read_next(&data->handle);
> if (res <= 0)
> @@ -182,6 +180,11 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf,
>
> data = filp->private_data;
>
> + if (snapshot_encryption_enabled(data)) {
> + res = snapshot_write_encrypted(data, buf, count, offp);
> + goto unlock;
> + }
> +
> if (!pg_offp) {
> res = snapshot_write_next(&data->handle);
> if (res <= 0)
> @@ -317,6 +320,12 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
> break;
>
> case SNAPSHOT_ATOMIC_RESTORE:
> + if (snapshot_encryption_enabled(data)) {
> + error = snapshot_finalize_decrypted_image(data);
> + if (error)
> + break;
> + }
> +
> snapshot_write_finalize(&data->handle);
> if (data->mode != O_WRONLY || !data->frozen ||
> !snapshot_image_loaded(&data->handle)) {
> @@ -352,6 +361,8 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
> }
> size = snapshot_get_image_size();
> size <<= PAGE_SHIFT;
> + if (snapshot_encryption_enabled(data))
> + size = snapshot_get_encrypted_image_size(size);
> error = put_user(size, (loff_t __user *)arg);
> break;
>
> @@ -409,6 +420,13 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
> error = snapshot_set_swap_area(data, (void __user *)arg);
> break;
>
> + case SNAPSHOT_ENABLE_ENCRYPTION:
> + if (data->mode == O_RDONLY)
> + error = snapshot_get_encryption_key(data, (void __user *)arg);
> + else
> + error = snapshot_set_encryption_key(data, (void __user *)arg);
> + break;
> +
> default:
> error = -ENOTTY;
>
> diff --git a/kernel/power/user.h b/kernel/power/user.h
> new file mode 100644
> index 00000000000000..ac429782abff85
> --- /dev/null
> +++ b/kernel/power/user.h
> @@ -0,0 +1,103 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <linux/crypto.h>
> +#include <crypto/aead.h>
> +#include <crypto/aes.h>
> +
> +#define SNAPSHOT_ENCRYPTION_KEY_SIZE AES_KEYSIZE_128
> +#define SNAPSHOT_AUTH_TAG_SIZE 16
> +
> +/* Define the number of pages in a single AEAD encryption chunk. */
> +#define CHUNK_SIZE 16
> +
> +struct snapshot_data {
> + struct snapshot_handle handle;
> + int swap;
> + int mode;
> + bool frozen;
> + bool ready;
> + bool platform_support;
> + bool free_bitmaps;
> + dev_t dev;
> +
> +#if defined(CONFIG_ENCRYPTED_HIBERNATION)
> + struct crypto_aead *aead_tfm;
> + struct aead_request *aead_req;
> + void *crypt_pages[CHUNK_SIZE];
> + u8 auth_tag[SNAPSHOT_AUTH_TAG_SIZE];
> + struct scatterlist sg[CHUNK_SIZE + 2]; /* Add room for AD and auth tag. */
> + size_t crypt_offset;
> + size_t crypt_size;
> + uint64_t crypt_total;
> + uint64_t nonce_low;
> + uint64_t nonce_high;
> +#endif
> +
> +};
> +
> +extern struct snapshot_data snapshot_state;
> +
> +/* kernel/power/swapenc.c routines */
> +#if defined(CONFIG_ENCRYPTED_HIBERNATION)
> +
> +ssize_t snapshot_read_encrypted(struct snapshot_data *data,
> + char __user *buf, size_t count, loff_t *offp);
> +
> +ssize_t snapshot_write_encrypted(struct snapshot_data *data,
> + const char __user *buf, size_t count,
> + loff_t *offp);
> +
> +void snapshot_teardown_encryption(struct snapshot_data *data);
> +int snapshot_get_encryption_key(struct snapshot_data *data,
> + struct uswsusp_key_blob __user *key);
> +
> +int snapshot_set_encryption_key(struct snapshot_data *data,
> + struct uswsusp_key_blob __user *key);
> +
> +loff_t snapshot_get_encrypted_image_size(loff_t raw_size);
> +
> +int snapshot_finalize_decrypted_image(struct snapshot_data *data);
> +
> +#define snapshot_encryption_enabled(data) (!!(data)->aead_tfm)
> +
> +#else
> +
> +ssize_t snapshot_read_encrypted(struct snapshot_data *data,
> + char __user *buf, size_t count, loff_t *offp)
> +{
> + return -ENOTTY;
> +}
> +
> +ssize_t snapshot_write_encrypted(struct snapshot_data *data,
> + const char __user *buf, size_t count,
> + loff_t *offp)
> +{
> + return -ENOTTY;
> +}
> +
> +static void snapshot_teardown_encryption(struct snapshot_data *data) {}
> +static int snapshot_get_encryption_key(struct snapshot_data *data,
> + struct uswsusp_key_blob __user *key)
> +{
> + return -ENOTTY;
> +}
> +
> +static int snapshot_set_encryption_key(struct snapshot_data *data,
> + struct uswsusp_key_blob __user *key)
> +{
> + return -ENOTTY;
> +}
> +
> +static loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
> +{
> + return raw_size;
> +}
> +
> +static int snapshot_finalize_decrypted_image(struct snapshot_data *data)
> +{
> + return -ENOTTY;
> +}
> +
> +#define snapshot_encryption_enabled(data) (0)
> +
> +#endif
> --
> 2.38.1.431.g37b22c650d-goog
>

BR, Jarkko

2022-11-07 18:27:54

by Evan Green

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] tpm: Allow PCR 23 to be restricted to kernel-only use

On Mon, Nov 7, 2022 at 3:40 AM Jarkko Sakkinen <[email protected]> wrote:
>
> On Thu, Nov 03, 2022 at 11:01:11AM -0700, Evan Green wrote:
> > From: Matthew Garrett <[email protected]>
> >
> > Introduce a new Kconfig, TCG_TPM_RESTRICT_PCR, which if enabled
> > restricts usermode's ability to extend or reset PCR 23.
> >
> > Under certain circumstances it might be desirable to enable the creation
> > of TPM-backed secrets that are only accessible to the kernel. In an
> > ideal world this could be achieved by using TPM localities, but these
> > don't appear to be available on consumer systems. An alternative is to
> > simply block userland from modifying one of the resettable PCRs, leaving
> > it available to the kernel. If the kernel ensures that no userland can
> > access the TPM while it is carrying out work, it can reset PCR 23,
> > extend it to an arbitrary value, create or load a secret, and then reset
> > the PCR again. Even if userland somehow obtains the sealed material, it
> > will be unable to unseal it since PCR 23 will never be in the
> > appropriate state.
> >
> > This Kconfig is only properly supported for systems with TPM2 devices.
> > For systems with TPM1 devices, having this Kconfig enabled completely
> > restricts usermode's access to the TPM. TPM1 contains support for
> > tunnelled transports, which usermode could use to smuggle commands
> > through that this Kconfig is attempting to restrict.
> >
> > Link: https://lore.kernel.org/lkml/[email protected]/
> > Signed-off-by: Matthew Garrett <[email protected]>
> > Signed-off-by: Evan Green <[email protected]>
> > ---
> >
> > Changes in v4:
> > - Augment the commit message (Jarkko)
> >
> > Changes in v3:
> > - Fix up commit message (Jarkko)
> > - tpm2_find_and_validate_cc() was split (Jarkko)
> > - Simply fully restrict TPM1 since v2 failed to account for tunnelled
> > transport sessions (Stefan and Jarkko).
> >
> > Changes in v2:
> > - Fixed sparse warnings
> >
> > drivers/char/tpm/Kconfig | 12 ++++++++++++
> > drivers/char/tpm/tpm-dev-common.c | 8 ++++++++
> > drivers/char/tpm/tpm.h | 19 +++++++++++++++++++
> > drivers/char/tpm/tpm1-cmd.c | 13 +++++++++++++
> > drivers/char/tpm/tpm2-cmd.c | 22 ++++++++++++++++++++++
> > 5 files changed, 74 insertions(+)
> >
> > diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> > index 927088b2c3d3f2..c8ed54c66e399a 100644
> > --- a/drivers/char/tpm/Kconfig
> > +++ b/drivers/char/tpm/Kconfig
> > @@ -211,4 +211,16 @@ config TCG_FTPM_TEE
> > This driver proxies for firmware TPM running in TEE.
> >
> > source "drivers/char/tpm/st33zp24/Kconfig"
> > +
> > +config TCG_TPM_RESTRICT_PCR
> > + bool "Restrict userland access to PCR 23"
> > + depends on TCG_TPM
> > + help
> > + If set, block userland from extending or resetting PCR 23. This allows it
> > + to be restricted to in-kernel use, preventing userland from being able to
> > + make use of data sealed to the TPM by the kernel. This is required for
> > + secure hibernation support, but should be left disabled if any userland
> > + may require access to PCR23. This is a TPM2-only feature, and if enabled
> > + on a TPM1 machine will cause all usermode TPM commands to return EPERM due
> > + to the complications introduced by tunnelled sessions in TPM1.2.
> > endif # TCG_TPM
> > diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
> > index dc4c0a0a512903..7a4e618c7d1942 100644
> > --- a/drivers/char/tpm/tpm-dev-common.c
> > +++ b/drivers/char/tpm/tpm-dev-common.c
> > @@ -198,6 +198,14 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
> > priv->response_read = false;
> > *off = 0;
> >
> > + if (priv->chip->flags & TPM_CHIP_FLAG_TPM2)
> > + ret = tpm2_cmd_restricted(priv->chip, priv->data_buffer, size);
> > + else
> > + ret = tpm1_cmd_restricted(priv->chip, priv->data_buffer, size);
> > +
> > + if (ret)
> > + goto out;
> > +
> > /*
> > * If in nonblocking mode schedule an async job to send
> > * the command return the size.
> > diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> > index f1e0f490176f01..c0845e3f9eda17 100644
> > --- a/drivers/char/tpm/tpm.h
> > +++ b/drivers/char/tpm/tpm.h
> > @@ -245,4 +245,23 @@ void tpm_bios_log_setup(struct tpm_chip *chip);
> > void tpm_bios_log_teardown(struct tpm_chip *chip);
> > int tpm_dev_common_init(void);
> > void tpm_dev_common_exit(void);
> > +
> > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > +#define TPM_RESTRICTED_PCR 23
> > +
> > +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> > +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> > +#else
> > +static inline int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> > + size_t size)
> > +{
> > + return 0;
> > +}
> > +
> > +static inline int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> > + size_t size)
> > +{
> > + return 0;
> > +}
> > +#endif
> > #endif
> > diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
> > index cf64c738510529..1869e89215fcb9 100644
> > --- a/drivers/char/tpm/tpm1-cmd.c
> > +++ b/drivers/char/tpm/tpm1-cmd.c
> > @@ -811,3 +811,16 @@ int tpm1_get_pcr_allocation(struct tpm_chip *chip)
> >
> > return 0;
> > }
> > +
> > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> > +{
> > + /*
> > + * Restrict all usermode commands on TPM1.2. Ideally we'd just restrict
> > + * TPM_ORD_PCR_EXTEND and TPM_ORD_PCR_RESET, but TPM1.2 also supports
> > + * tunnelled transport sessions where the kernel would be unable to filter
> > + * commands.
> > + */
> > + return -EPERM;
> > +}
> > +#endif
> > diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
> > index 303ce2ea02a4b0..e0503cfd7bcfee 100644
> > --- a/drivers/char/tpm/tpm2-cmd.c
> > +++ b/drivers/char/tpm/tpm2-cmd.c
> > @@ -778,3 +778,25 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc)
> >
> > return -1;
> > }
> > +
> > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> > +{
> > + int cc = tpm2_find_and_validate_cc(chip, NULL, buffer, size);
> > + __be32 *handle;
> > +
> > + switch (cc) {
> > + case TPM2_CC_PCR_EXTEND:
> > + case TPM2_CC_PCR_RESET:
> > + if (size < (TPM_HEADER_SIZE + sizeof(u32)))
> > + return -EINVAL;
> > +
> > + handle = (__be32 *)&buffer[TPM_HEADER_SIZE];
> > + if (be32_to_cpu(*handle) == TPM_RESTRICTED_PCR)
> > + return -EPERM;
> > + break;
> > + }
> > +
> > + return 0;
> > +}
> > +#endif
> > --
> > 2.38.1.431.g37b22c650d-goog
> >
>
> This looks otherwise good but I have still one remark: what is the reason
> for restricting PCR23 for TPM 1.x?

Mostly I was trying to do the least surprising thing for someone who
had compiled with this RESTRICT_PCR Kconfig enabled but booted a TPM1
system. If we do nothing for TPM1, then the encrypted hibernation
mechanism appears to work fine, but leaves a gaping hole where
usermode can manipulate PCR23 themselves to create forged encrypted
hibernate images. Denying all usermode access makes the Kconfig
correct on TPM1 systems, at the expense of all usermode access (rather
than just access to PCR23).

An alternative that might be friendlier to users would be to do a
runtime check in the encrypted hibernate code to simply fail if this
isn't TPM2. The tradeoff there is that it waters down the Kconfig
significantly to "RESTRICT_PCR sometimes, if you can, otherwise meh".
That seemed a bit dangerous, as any future features that may want to
rely on this Kconfig would have to remember to restrict their support
to TPM2 as well.

-Evan

>
> BR, Jarkko
>

2022-11-10 00:45:43

by Evan Green

[permalink] [raw]
Subject: Re: [PATCH v4 09/11] PM: hibernate: Mix user key in encrypted hibernate

On Fri, Nov 4, 2022 at 11:54 AM Kees Cook <[email protected]> wrote:
>
> On Thu, Nov 03, 2022 at 11:01:17AM -0700, Evan Green wrote:
> > Usermode may have their own data protection requirements when it comes
> > to encrypting the hibernate image. For example, users may want a policy
> > where the hibernate image is protected by a key derived both from
> > platform-level security as well as authentication data (such as a
> > password or PIN). This way, even if the platform is compromised (ie a
> > stolen laptop), sensitive data cannot be exfiltrated via the hibernate
> > image without additional data (like the user's password).
> >
> > The kernel is already doing the encryption, but will be protecting its
> > key with the TPM alone. Allow usermode to mix in key content of their own
> > for the data portion of the hibernate image, so that the image
> > encryption key is determined both by a TPM-backed secret and
> > user-defined data.
> >
> > To mix the user key in, we hash the kernel key followed by the user key,
> > and use the resulting hash as the new key. This allows usermode to mix
> > in its key material without giving it too much control over what key is
> > actually driving the encryption (which might be used to attack the
> > secret kernel key).
> >
> > Limiting this to the data portion allows the kernel to receive the page
> > map and prepare its giant allocation even if this user key is not yet
> > available (ie the user has not yet finished typing in their password).
> > Once the user key becomes available, the data portion can be pushed
> > through to the kernel as well. This enables "preloading" scenarios,
> > where the hibernate image is loaded off of disk while the additional
> > key material (eg password) is being collected.
> >
> > One annoyance of the "preloading" scheme is that hibernate image memory
> > is effectively double-allocated: first by the usermode process pulling
> > encrypted contents off of disk and holding it, and second by the kernel
> > in its giant allocation in prepare_image(). An interesting future
> > optimization would be to allow the kernel to accept and store encrypted
> > page data before the user key is available. This would remove the
> > double allocation problem, as usermode could push the encrypted pages
> > loaded from disk immediately without storing them. The kernel could defer
> > decryption of the data until the user key is available, while still
> > knowing the correct page locations to store the encrypted data in.
> >
> > Signed-off-by: Evan Green <[email protected]>
> > ---
> >
> > (no changes since v2)
> >
> > Changes in v2:
> > - Add missing static on snapshot_encrypted_byte_count()
> > - Fold in only the used kernel key bytes to the user key.
> > - Make the user key length 32 (Eric)
> > - Use CRYPTO_LIB_SHA256 for less boilerplate (Eric)
> >
> > include/uapi/linux/suspend_ioctls.h | 15 ++-
> > kernel/power/Kconfig | 1 +
> > kernel/power/power.h | 1 +
> > kernel/power/snapenc.c | 158 ++++++++++++++++++++++++++--
> > kernel/power/snapshot.c | 5 +
> > kernel/power/user.c | 4 +
> > kernel/power/user.h | 12 +++
> > 7 files changed, 185 insertions(+), 11 deletions(-)
> >
> > diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h
> > index b73026ef824bb9..f93a22eac52dc2 100644
> > --- a/include/uapi/linux/suspend_ioctls.h
> > +++ b/include/uapi/linux/suspend_ioctls.h
> > @@ -25,6 +25,18 @@ struct uswsusp_key_blob {
> > __u8 nonce[USWSUSP_KEY_NONCE_SIZE];
> > } __attribute__((packed));
> >
> > +/*
> > + * Allow user mode to fold in key material for the data portion of the hibernate
> > + * image.
> > + */
> > +struct uswsusp_user_key {
> > + /* Kernel returns the metadata size. */
> > + __kernel_loff_t meta_size;
> > + __u32 key_len;
> > + __u8 key[32];
>
> Why is this 32? (Is there a non-literal we can put here?)

Sure, I can make a new define for this: USWSUSP_USER_KEY_SIZE. Really
it just needs to be enough key material that usermode feels like
they've swizzled things up enough. I wanted to avoid using a
particular implementation constant like AES_KEYSIZE_256 because I
wanted that to be a kernel implementation detail, and also wanted to
avoid adding additional header dependencies to suspend_ioctls.h.

>
> > + __u32 pad;
>
> And why the pad?

I added the padding because I was finding myself struggling with what
I think are compiler differences when the structure size isn't a
multiple of its required alignment (which is 8 due to the
__kernel_loff_t). My usermode bindings in Rust were generating the
wrong ioctl numbers because it computed a different structure size.
Adding the padding removes the opportunity for misinterpretation.

>
> > +};
> > +
> > #define SNAPSHOT_IOC_MAGIC '3'
> > #define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1)
> > #define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2)
> > @@ -42,6 +54,7 @@ struct uswsusp_key_blob {
> > #define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t)
> > #define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t)
> > #define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob)
> > -#define SNAPSHOT_IOC_MAXNR 21
> > +#define SNAPSHOT_SET_USER_KEY _IOWR(SNAPSHOT_IOC_MAGIC, 22, struct uswsusp_user_key)
> > +#define SNAPSHOT_IOC_MAXNR 22
> >
> > #endif /* _LINUX_SUSPEND_IOCTLS_H */
> > diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
> > index 2f8acbd87b34dc..35bf48b925ebf6 100644
> > --- a/kernel/power/Kconfig
> > +++ b/kernel/power/Kconfig
> > @@ -97,6 +97,7 @@ config ENCRYPTED_HIBERNATION
> > depends on HIBERNATION_SNAPSHOT_DEV
> > depends on CRYPTO_AEAD2=y
> > depends on TRUSTED_KEYS=y
> > + select CRYPTO_LIB_SHA256
> > default n
> > help
> > Enable support for kernel-based encryption of hibernation snapshots
> > diff --git a/kernel/power/power.h b/kernel/power/power.h
> > index b4f43394320961..5955e5cf692302 100644
> > --- a/kernel/power/power.h
> > +++ b/kernel/power/power.h
> > @@ -151,6 +151,7 @@ struct snapshot_handle {
> >
> > extern unsigned int snapshot_additional_pages(struct zone *zone);
> > extern unsigned long snapshot_get_image_size(void);
> > +extern unsigned long snapshot_get_meta_page_count(void);
> > extern int snapshot_read_next(struct snapshot_handle *handle);
> > extern int snapshot_write_next(struct snapshot_handle *handle);
> > extern void snapshot_write_finalize(struct snapshot_handle *handle);
> > diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
> > index 7ff4fc66f7500c..50167a37c5bf23 100644
> > --- a/kernel/power/snapenc.c
> > +++ b/kernel/power/snapenc.c
> > @@ -6,6 +6,7 @@
> > #include <crypto/gcm.h>
> > #include <keys/trusted-type.h>
> > #include <linux/key-type.h>
> > +#include <crypto/sha.h>
> > #include <linux/random.h>
> > #include <linux/mm.h>
> > #include <linux/tpm.h>
> > @@ -21,6 +22,38 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256,
> > 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05,
> > 0x5f, 0x49}};
> >
> > +/* Derive a key from the kernel and user keys for data encryption. */
> > +static int snapshot_use_user_key(struct snapshot_data *data)
> > +{
> > + u8 digest[SHA256_DIGEST_SIZE];
> > + struct trusted_key_payload *payload = data->key->payload.data[0];
> > + struct sha256_state sha256_state;
> > +
> > + /*
> > + * Hash the kernel key and the user key together. This folds in the user
> > + * key, but not in a way that gives the user mode predictable control
> > + * over the key bits.
> > + */
> > + sha256_init(&sha256_state);
> > + sha256_update(&sha256_state, payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE);
> > + sha256_update(&sha256_state, data->user_key, sizeof(data->user_key));
> > + sha256_final(&sha256_state, digest);
> > + return crypto_aead_setkey(data->aead_tfm,
> > + digest,
> > + SNAPSHOT_ENCRYPTION_KEY_SIZE);
> > +}
> > +
> > +/* Check to see if it's time to switch to the user key, and do it if so. */
> > +static int snapshot_check_user_key_switch(struct snapshot_data *data)
> > +{
> > + if (data->user_key_valid && data->meta_size &&
> > + data->crypt_total == data->meta_size) {
> > + return snapshot_use_user_key(data);
> > + }
> > +
> > + return 0;
> > +}
> > +
> > /* Encrypt more data from the snapshot into the staging area. */
> > static int snapshot_encrypt_refill(struct snapshot_data *data)
> > {
> > @@ -32,6 +65,15 @@ static int snapshot_encrypt_refill(struct snapshot_data *data)
> > int pg_idx;
> > int res;
> >
> > + if (data->crypt_total == 0) {
> > + data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT;
> > +
> > + } else {
> > + res = snapshot_check_user_key_switch(data);
> > + if (res)
> > + return res;
> > + }
> > +
> > /*
> > * The first buffer is the associated data, set to the offset to prevent
> > * attacks that rearrange chunks.
> > @@ -42,6 +84,11 @@ static int snapshot_encrypt_refill(struct snapshot_data *data)
> > for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) {
> > void *buf = data->crypt_pages[pg_idx];
> >
> > + /* Stop at the meta page boundary to potentially switch keys. */
> > + if (total &&
> > + ((data->crypt_total + total) == data->meta_size))
> > + break;
> > +
> > res = snapshot_read_next(&data->handle);
> > if (res < 0)
> > return res;
> > @@ -114,10 +161,10 @@ static int snapshot_decrypt_drain(struct snapshot_data *data)
> > sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE);
> >
> > /*
> > - * It's possible this is the final decrypt, and there are fewer than
> > - * CHUNK_SIZE pages. If this is the case we would have just written the
> > - * auth tag into the first few bytes of a new page. Copy to the tag if
> > - * so.
> > + * It's possible this is the final decrypt, or the final decrypt of the
> > + * meta region, and there are fewer than CHUNK_SIZE pages. If this is
> > + * the case we would have just written the auth tag into the first few
> > + * bytes of a new page. Copy to the tag if so.
> > */
> > if ((page_count < CHUNK_SIZE) &&
> > (data->crypt_offset - total) == sizeof(data->auth_tag)) {
> > @@ -172,7 +219,14 @@ static int snapshot_decrypt_drain(struct snapshot_data *data)
> > total += PAGE_SIZE;
> > }
> >
> > + if (data->crypt_total == 0)
> > + data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT;
> > +
> > data->crypt_total += total;
> > + res = snapshot_check_user_key_switch(data);
> > + if (res)
> > + return res;
> > +
> > return 0;
> > }
> >
> > @@ -221,8 +275,26 @@ static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data,
> > if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) {
> > size_t pg_idx = data->crypt_offset >> PAGE_SHIFT;
> > size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1);
> > + size_t size_avail = PAGE_SIZE;
> > *buf = data->crypt_pages[pg_idx] + pg_off;
> > - return PAGE_SIZE - pg_off;
> > +
> > + /*
> > + * If this is the boundary where the meta pages end, then just
> > + * return enough for the auth tag.
> > + */
> > + if (data->meta_size && (data->crypt_total < data->meta_size)) {
> > + uint64_t total_done =
> > + data->crypt_total + data->crypt_offset;
> > +
> > + if ((total_done >= data->meta_size) &&
> > + (total_done <
> > + (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE))) {
> > +
> > + size_avail = SNAPSHOT_AUTH_TAG_SIZE;
> > + }
> > + }
> > +
> > + return size_avail - pg_off;
> > }
> >
> > /* Use offsets just beyond the size to return the tag. */
> > @@ -304,9 +376,15 @@ ssize_t snapshot_write_encrypted(struct snapshot_data *data,
> > break;
> > }
> >
> > - /* Drain the encrypted buffer if it's full. */
> > + /*
> > + * Drain the encrypted buffer if it's full, or if we hit the end
> > + * of the meta pages and need a key change.
> > + */
> > if ((data->crypt_offset >=
> > - ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) {
> > + ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE)) ||
> > + (data->meta_size && (data->crypt_total < data->meta_size) &&
> > + ((data->crypt_total + data->crypt_offset) ==
> > + (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE)))) {
> >
> > int rc;
> >
> > @@ -350,6 +428,8 @@ void snapshot_teardown_encryption(struct snapshot_data *data)
> > data->crypt_pages[i] = NULL;
> > }
> > }
> > +
> > + memset(data->user_key, 0, sizeof(data->user_key));
> > }
> >
> > static int snapshot_setup_encryption_common(struct snapshot_data *data)
> > @@ -359,6 +439,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data)
> > data->crypt_total = 0;
> > data->crypt_offset = 0;
> > data->crypt_size = 0;
> > + data->user_key_valid = false;
> > memset(data->crypt_pages, 0, sizeof(data->crypt_pages));
> > /* This only works once per hibernate. */
> > if (data->aead_tfm)
> > @@ -661,15 +742,72 @@ int snapshot_set_encryption_key(struct snapshot_data *data,
> > return rc;
> > }
> >
> > -loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
> > +static loff_t snapshot_encrypted_byte_count(loff_t plain_size)
> > {
> > - loff_t pages = raw_size >> PAGE_SHIFT;
> > + loff_t pages = plain_size >> PAGE_SHIFT;
> > loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE;
> > /*
> > * The encrypted size is the normal size, plus a stitched in
> > * authentication tag for every chunk of pages.
> > */
> > - return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
> > + return plain_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE);
> > +}
> > +
> > +static loff_t snapshot_get_meta_data_size(void)
> > +{
> > + loff_t pages = snapshot_get_meta_page_count();
> > +
> > + return snapshot_encrypted_byte_count(pages << PAGE_SHIFT);
> > +}
> > +
> > +int snapshot_set_user_key(struct snapshot_data *data,
> > + struct uswsusp_user_key __user *key)
> > +{
> > + struct uswsusp_user_key user_key;
> > + unsigned int key_len;
> > + int rc;
> > + loff_t size;
> > +
> > + /*
> > + * Return the metadata size, the number of bytes that can be fed in before
> > + * the user data key is needed at resume time.
> > + */
> > + size = snapshot_get_meta_data_size();
> > + rc = put_user(size, &key->meta_size);
> > + if (rc)
> > + return rc;
> > +
> > + rc = copy_from_user(&user_key, key, sizeof(struct uswsusp_user_key));
> > + if (rc)
> > + return rc;
> > +
> > + key_len = min_t(__u32, user_key.key_len, sizeof(data->user_key));
> > + if (key_len < 8)
> > + return -EINVAL;
> > +
> > + /* Don't allow it if it's too late. */
> > + if (data->crypt_total > data->meta_size)
> > + return -EBUSY;
> > +
> > + memset(data->user_key, 0, sizeof(data->user_key));
> > + memcpy(data->user_key, user_key.key, key_len);
>
> Is struct snapshot_data::user_key is supposed to be %NUL terminated? Or
> is it just 0-padded up to 32 bytes? If the latter, it might be worth
> marking struct snapshot_data::user_data with the __non_string attribute.

It's just zero padded up to 32 bytes, and is stored here until it's
ready to be folded in by snapshot_use_user_key(). I'll add the
attribute as well.

>
> I don't like the dissociation of struct uswsusp_user_key::user_key and
> struct snapshot_data::user_key, since a mistake here can lead to copying
> kernel memory into struct snapshot_data::user_key. It would be nice to
> see something like:
>
> BUILD_BUG_ON(sizeof(data->user_key) < sizeof(user_key.key));

Ok, now that I've got a define for the size in suspend_ioctls.h, I'll
use that in snapshot_data.user_key as well. I'll also add the
BUILD_BUG_ON here, and for a couple of other compile-time size
requirements in snapshot_use_user_key().




>
> --
> Kees Cook

2022-11-10 00:46:22

by Evan Green

[permalink] [raw]
Subject: Re: [PATCH v4 07/11] PM: hibernate: Add kernel-based encryption

On Fri, Nov 4, 2022 at 11:38 AM Kees Cook <[email protected]> wrote:
>
> On Thu, Nov 03, 2022 at 11:01:15AM -0700, Evan Green wrote:
> > [...]
> > +config ENCRYPTED_HIBERNATION
> > + bool "Encryption support for userspace snapshots"
> > + depends on HIBERNATION_SNAPSHOT_DEV
> > + depends on CRYPTO_AEAD2=y
> > + default n
>
> "default n" is the, err, default, so this line can be left out.
>
> If someone more familiar with the crypto pieces can review the rest,
> that would be good. :)

Eric and I emailed briefly about it a couple weeks ago, he said he
would try to take a look when he could. I'm optimistic.

-Evan

>
> --
> Kees Cook

2022-11-10 01:01:55

by Evan Green

[permalink] [raw]
Subject: Re: [PATCH v4 10/11] PM: hibernate: Verify the digest encryption key

On Fri, Nov 4, 2022 at 12:00 PM Kees Cook <[email protected]> wrote:
>
> On Thu, Nov 03, 2022 at 11:01:18AM -0700, Evan Green wrote:
> > We want to ensure that the key used to encrypt the digest was created by
> > the kernel during hibernation. To do this we request that the TPM
> > include information about the value of PCR 23 at the time of key
> > creation in the sealed blob. On resume, we can make sure that the PCR
> > information in the creation data blob (already certified by the TPM to
> > be accurate) corresponds to the expected value. Since only
> > the kernel can touch PCR 23, if an attacker generates a key themselves
> > the value of PCR 23 will have been different, allowing us to reject the
> > key and boot normally instead of resuming.
> >
> > Co-developed-by: Matthew Garrett <[email protected]>
> > Signed-off-by: Matthew Garrett <[email protected]>
> > Signed-off-by: Evan Green <[email protected]>
> >
> > ---
> > Matthew's original version of this patch is here:
> > https://patchwork.kernel.org/project/linux-pm/patch/[email protected]/
> >
> > I moved the TPM2_CC_CERTIFYCREATION code into a separate change in the
> > trusted key code because the blob_handle was being flushed and was no
> > longer valid for use in CC_CERTIFYCREATION after the key was loaded. As
> > an added benefit of moving the certification into the trusted keys code,
> > we can drop the other patch from the original series that squirrelled
> > the blob_handle away.
> >
> > Changes in v4:
> > - Local variable reordering (Jarkko)
> >
> > Changes in v3:
> > - Changed funky tag to Co-developed-by (Kees). Matthew, holler if you
> > want something different.
> >
> > Changes in v2:
> > - Fixed some sparse warnings
> > - Use CRYPTO_LIB_SHA256 to get rid of sha256_data() (Eric)
> > - Adjusted offsets due to new ASN.1 format, and added a creation data
> > length check.
> >
> > kernel/power/snapenc.c | 67 ++++++++++++++++++++++++++++++++++++++++--
> > 1 file changed, 65 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c
> > index 50167a37c5bf23..2f421061498246 100644
> > --- a/kernel/power/snapenc.c
> > +++ b/kernel/power/snapenc.c
> > @@ -22,6 +22,12 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256,
> > 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05,
> > 0x5f, 0x49}};
> >
> > +/* sha256(sha256(empty_pcr | known_digest)) */
> > +static const char expected_digest[] = {0x2f, 0x96, 0xf2, 0x1b, 0x70, 0xa9, 0xe8,
> > + 0x42, 0x25, 0x8e, 0x66, 0x07, 0xbe, 0xbc, 0xe3, 0x1f, 0x2c, 0x84, 0x4a,
> > + 0x3f, 0x85, 0x17, 0x31, 0x47, 0x9a, 0xa5, 0x53, 0xbb, 0x23, 0x0c, 0x32,
> > + 0xf3};
> > +
> > /* Derive a key from the kernel and user keys for data encryption. */
> > static int snapshot_use_user_key(struct snapshot_data *data)
> > {
> > @@ -486,7 +492,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data)
> > static int snapshot_create_kernel_key(struct snapshot_data *data)
> > {
> > /* Create a key sealed by the SRK. */
> > - char *keyinfo = "new\t32\tkeyhandle=0x81000000";
> > + char *keyinfo = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000";
> > const struct cred *cred = current_cred();
> > struct tpm_digest *digests = NULL;
> > struct key *key = NULL;
> > @@ -613,6 +619,8 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
> >
> > char *keytemplate = "load\t%s\tkeyhandle=0x81000000";
> > const struct cred *cred = current_cred();
> > + struct trusted_key_payload *payload;
> > + char certhash[SHA256_DIGEST_SIZE];
> > struct tpm_digest *digests = NULL;
> > char *blobstring = NULL;
> > struct key *key = NULL;
> > @@ -635,8 +643,10 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
> >
> > digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest),
> > GFP_KERNEL);
> > - if (!digests)
> > + if (!digests) {
> > + ret = -ENOMEM;
> > goto out;
> > + }
> >
> > for (i = 0; i < chip->nr_allocated_banks; i++) {
> > digests[i].alg_id = chip->allocated_banks[i].alg_id;
> > @@ -676,6 +686,59 @@ static int snapshot_load_kernel_key(struct snapshot_data *data,
> > if (ret != 0)
> > goto out;
> >
> > + /* Verify the creation hash matches the creation data. */
> > + payload = key->payload.data[0];
> > + if (!payload->creation || !payload->creation_hash ||
> > + (payload->creation_len < 3) ||
>
> Later accesses are reaching into indexes, 6, 8, 12, 14, etc. Shouldn't
> this test be:
>
> (payload->creation_len < 14 + SHA256_DIGEST_SIZE) ||
>
Yikes, you're right.

>
> > + (payload->creation_hash_len < SHA256_DIGEST_SIZE)) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > +
> > + sha256(payload->creation + 2, payload->creation_len - 2, certhash);
>
> Why +2 offset?

The first two bytes are a __be16 size that isn't part of what the TPM hashes.

>
> > + if (memcmp(payload->creation_hash + 2, certhash, SHA256_DIGEST_SIZE) != 0) {
>
> And if this is +2 also, shouldn't the earlier test be:
>
> (payload->creation_hash_len - 2 != SHA256_DIGEST_SIZE)) {

Oops, yes.

>
> ?
>
> > + if (be32_to_cpu(*(__be32 *)&payload->creation[2]) != 1) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > +
> > + if (be16_to_cpu(*(__be16 *)&payload->creation[6]) != TPM_ALG_SHA256) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > +
> > + if (*(char *)&payload->creation[8] != 3) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > +
> > + /* PCR 23 selected */
> > + if (be32_to_cpu(*(__be32 *)&payload->creation[8]) != 0x03000080) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > +
> > + if (be16_to_cpu(*(__be16 *)&payload->creation[12]) !=
> > + SHA256_DIGEST_SIZE) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > +
> > + /* Verify PCR 23 contained the expected value when the key was created. */
> > + if (memcmp(&payload->creation[14], expected_digest,
> > + SHA256_DIGEST_SIZE) != 0) {
>
> These various literals (2, 6, 8, 3, 8, 0x03000080, 12, 14) should be
> explicit #defines so their purpose/meaning is more clear.
>
> I can guess at it, but better to avoid the guessing. :)

Ok, agreed it's a bit too hairy to manage this way. I can define a
struct specific to this form of the response I'm expecting, then use
struct fields like a proper C developer.




>
> > +
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > +
> > data->key = key;
> > key = NULL;
> >
> > --
> > 2.38.1.431.g37b22c650d-goog
> >
>
> --
> Kees Cook

2022-11-10 17:11:15

by Kees Cook

[permalink] [raw]
Subject: Re: [PATCH v4 09/11] PM: hibernate: Mix user key in encrypted hibernate

On Wed, Nov 09, 2022 at 04:30:10PM -0800, Evan Green wrote:
> On Fri, Nov 4, 2022 at 11:54 AM Kees Cook <[email protected]> wrote:
> >
> > On Thu, Nov 03, 2022 at 11:01:17AM -0700, Evan Green wrote:
> > > Usermode may have their own data protection requirements when it comes
> > > to encrypting the hibernate image. For example, users may want a policy
> > > where the hibernate image is protected by a key derived both from
> > > platform-level security as well as authentication data (such as a
> > > password or PIN). This way, even if the platform is compromised (ie a
> > > stolen laptop), sensitive data cannot be exfiltrated via the hibernate
> > > image without additional data (like the user's password).
> > >
> > > The kernel is already doing the encryption, but will be protecting its
> > > key with the TPM alone. Allow usermode to mix in key content of their own
> > > for the data portion of the hibernate image, so that the image
> > > encryption key is determined both by a TPM-backed secret and
> > > user-defined data.
> > >
> > > To mix the user key in, we hash the kernel key followed by the user key,
> > > and use the resulting hash as the new key. This allows usermode to mix
> > > in its key material without giving it too much control over what key is
> > > actually driving the encryption (which might be used to attack the
> > > secret kernel key).
> > >
> > > Limiting this to the data portion allows the kernel to receive the page
> > > map and prepare its giant allocation even if this user key is not yet
> > > available (ie the user has not yet finished typing in their password).
> > > Once the user key becomes available, the data portion can be pushed
> > > through to the kernel as well. This enables "preloading" scenarios,
> > > where the hibernate image is loaded off of disk while the additional
> > > key material (eg password) is being collected.
> > >
> > > One annoyance of the "preloading" scheme is that hibernate image memory
> > > is effectively double-allocated: first by the usermode process pulling
> > > encrypted contents off of disk and holding it, and second by the kernel
> > > in its giant allocation in prepare_image(). An interesting future
> > > optimization would be to allow the kernel to accept and store encrypted
> > > page data before the user key is available. This would remove the
> > > double allocation problem, as usermode could push the encrypted pages
> > > loaded from disk immediately without storing them. The kernel could defer
> > > decryption of the data until the user key is available, while still
> > > knowing the correct page locations to store the encrypted data in.
> > >
> > > Signed-off-by: Evan Green <[email protected]>
> > > ---
> > >
> > > (no changes since v2)
> > >
> > > Changes in v2:
> > > - Add missing static on snapshot_encrypted_byte_count()
> > > - Fold in only the used kernel key bytes to the user key.
> > > - Make the user key length 32 (Eric)
> > > - Use CRYPTO_LIB_SHA256 for less boilerplate (Eric)
> > >
> > > include/uapi/linux/suspend_ioctls.h | 15 ++-
> > > kernel/power/Kconfig | 1 +
> > > kernel/power/power.h | 1 +
> > > kernel/power/snapenc.c | 158 ++++++++++++++++++++++++++--
> > > kernel/power/snapshot.c | 5 +
> > > kernel/power/user.c | 4 +
> > > kernel/power/user.h | 12 +++
> > > 7 files changed, 185 insertions(+), 11 deletions(-)
> > >
> > > diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h
> > > index b73026ef824bb9..f93a22eac52dc2 100644
> > > --- a/include/uapi/linux/suspend_ioctls.h
> > > +++ b/include/uapi/linux/suspend_ioctls.h
> > > @@ -25,6 +25,18 @@ struct uswsusp_key_blob {
> > > __u8 nonce[USWSUSP_KEY_NONCE_SIZE];
> > > } __attribute__((packed));
> > >
> > > +/*
> > > + * Allow user mode to fold in key material for the data portion of the hibernate
> > > + * image.
> > > + */
> > > +struct uswsusp_user_key {
> > > + /* Kernel returns the metadata size. */
> > > + __kernel_loff_t meta_size;
> > > + __u32 key_len;
> > > + __u8 key[32];
> >
> > Why is this 32? (Is there a non-literal we can put here?)
>
> Sure, I can make a new define for this: USWSUSP_USER_KEY_SIZE. Really
> it just needs to be enough key material that usermode feels like
> they've swizzled things up enough. I wanted to avoid using a
> particular implementation constant like AES_KEYSIZE_256 because I
> wanted that to be a kernel implementation detail, and also wanted to
> avoid adding additional header dependencies to suspend_ioctls.h.

Can this just use __aligned(8) etc?

--
Kees Cook

2022-11-10 18:48:12

by Evan Green

[permalink] [raw]
Subject: Re: [PATCH v4 09/11] PM: hibernate: Mix user key in encrypted hibernate

On Thu, Nov 10, 2022 at 8:17 AM Kees Cook <[email protected]> wrote:
>
> On Wed, Nov 09, 2022 at 04:30:10PM -0800, Evan Green wrote:
> > On Fri, Nov 4, 2022 at 11:54 AM Kees Cook <[email protected]> wrote:
> > >
> > > On Thu, Nov 03, 2022 at 11:01:17AM -0700, Evan Green wrote:
> > > > Usermode may have their own data protection requirements when it comes
> > > > to encrypting the hibernate image. For example, users may want a policy
> > > > where the hibernate image is protected by a key derived both from
> > > > platform-level security as well as authentication data (such as a
> > > > password or PIN). This way, even if the platform is compromised (ie a
> > > > stolen laptop), sensitive data cannot be exfiltrated via the hibernate
> > > > image without additional data (like the user's password).
> > > >
> > > > The kernel is already doing the encryption, but will be protecting its
> > > > key with the TPM alone. Allow usermode to mix in key content of their own
> > > > for the data portion of the hibernate image, so that the image
> > > > encryption key is determined both by a TPM-backed secret and
> > > > user-defined data.
> > > >
> > > > To mix the user key in, we hash the kernel key followed by the user key,
> > > > and use the resulting hash as the new key. This allows usermode to mix
> > > > in its key material without giving it too much control over what key is
> > > > actually driving the encryption (which might be used to attack the
> > > > secret kernel key).
> > > >
> > > > Limiting this to the data portion allows the kernel to receive the page
> > > > map and prepare its giant allocation even if this user key is not yet
> > > > available (ie the user has not yet finished typing in their password).
> > > > Once the user key becomes available, the data portion can be pushed
> > > > through to the kernel as well. This enables "preloading" scenarios,
> > > > where the hibernate image is loaded off of disk while the additional
> > > > key material (eg password) is being collected.
> > > >
> > > > One annoyance of the "preloading" scheme is that hibernate image memory
> > > > is effectively double-allocated: first by the usermode process pulling
> > > > encrypted contents off of disk and holding it, and second by the kernel
> > > > in its giant allocation in prepare_image(). An interesting future
> > > > optimization would be to allow the kernel to accept and store encrypted
> > > > page data before the user key is available. This would remove the
> > > > double allocation problem, as usermode could push the encrypted pages
> > > > loaded from disk immediately without storing them. The kernel could defer
> > > > decryption of the data until the user key is available, while still
> > > > knowing the correct page locations to store the encrypted data in.
> > > >
> > > > Signed-off-by: Evan Green <[email protected]>
> > > > ---
> > > >
> > > > (no changes since v2)
> > > >
> > > > Changes in v2:
> > > > - Add missing static on snapshot_encrypted_byte_count()
> > > > - Fold in only the used kernel key bytes to the user key.
> > > > - Make the user key length 32 (Eric)
> > > > - Use CRYPTO_LIB_SHA256 for less boilerplate (Eric)
> > > >
> > > > include/uapi/linux/suspend_ioctls.h | 15 ++-
> > > > kernel/power/Kconfig | 1 +
> > > > kernel/power/power.h | 1 +
> > > > kernel/power/snapenc.c | 158 ++++++++++++++++++++++++++--
> > > > kernel/power/snapshot.c | 5 +
> > > > kernel/power/user.c | 4 +
> > > > kernel/power/user.h | 12 +++
> > > > 7 files changed, 185 insertions(+), 11 deletions(-)
> > > >
> > > > diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h
> > > > index b73026ef824bb9..f93a22eac52dc2 100644
> > > > --- a/include/uapi/linux/suspend_ioctls.h
> > > > +++ b/include/uapi/linux/suspend_ioctls.h
> > > > @@ -25,6 +25,18 @@ struct uswsusp_key_blob {
> > > > __u8 nonce[USWSUSP_KEY_NONCE_SIZE];
> > > > } __attribute__((packed));
> > > >
> > > > +/*
> > > > + * Allow user mode to fold in key material for the data portion of the hibernate
> > > > + * image.
> > > > + */
> > > > +struct uswsusp_user_key {
> > > > + /* Kernel returns the metadata size. */
> > > > + __kernel_loff_t meta_size;
> > > > + __u32 key_len;
> > > > + __u8 key[32];
> > >
> > > Why is this 32? (Is there a non-literal we can put here?)
> >
> > Sure, I can make a new define for this: USWSUSP_USER_KEY_SIZE. Really
> > it just needs to be enough key material that usermode feels like
> > they've swizzled things up enough. I wanted to avoid using a
> > particular implementation constant like AES_KEYSIZE_256 because I
> > wanted that to be a kernel implementation detail, and also wanted to
> > avoid adding additional header dependencies to suspend_ioctls.h.
>
> Can this just use __aligned(8) etc?

It's possible this is more an FFI issue that trails off the end of my
knowledge, so I should just drop the pad. But I'll dump out my
thoughts anyway for posterity:

My understanding is that the compiler pads the size of a struct up to
its required alignment so that arrays of the struct always stay
aligned. In this case, the sizeof() the struct both with and without
the pad member is 0x30, since __kernel_off_t has a required alignment
of 8. I had a couple of worries about that led me to naming that
padding:
* Though this structure isn't copied out of the kernel today, I
didn't want some future change that did it to accidentally leak kernel
memory via the unnamed padding.

* Given that the sizeof the struct is encoded into the ioctl number,
and we're to some extent relying on bespoke compiler behavior, I
thought the padding member might make us more resilient to an
unexpected compiler change later.

* On the usermode side, there are a bunch of Rust rules that I don't
totally understand related to "soundness", undefined values (which the
padding between struct members is), and transmuting structs back and
forth to byte arrays.

I confirmed with someone smarter than me that I'm not running afoul of
the Rust rules by dropping the padding and dropping the __packed I had
in Rust definition of the struct, so I'll plan to drop the pad member
here in the next spin. A very long-winded "OK, will do" :)

-Evan

2022-11-11 20:57:29

by Evan Green

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] tpm: Allow PCR 23 to be restricted to kernel-only use

On Mon, Nov 7, 2022 at 10:15 AM Evan Green <[email protected]> wrote:
>
> On Mon, Nov 7, 2022 at 3:40 AM Jarkko Sakkinen <[email protected]> wrote:
> >
> > On Thu, Nov 03, 2022 at 11:01:11AM -0700, Evan Green wrote:
> > > From: Matthew Garrett <[email protected]>
> > >
> > > Introduce a new Kconfig, TCG_TPM_RESTRICT_PCR, which if enabled
> > > restricts usermode's ability to extend or reset PCR 23.
> > >
> > > Under certain circumstances it might be desirable to enable the creation
> > > of TPM-backed secrets that are only accessible to the kernel. In an
> > > ideal world this could be achieved by using TPM localities, but these
> > > don't appear to be available on consumer systems. An alternative is to
> > > simply block userland from modifying one of the resettable PCRs, leaving
> > > it available to the kernel. If the kernel ensures that no userland can
> > > access the TPM while it is carrying out work, it can reset PCR 23,
> > > extend it to an arbitrary value, create or load a secret, and then reset
> > > the PCR again. Even if userland somehow obtains the sealed material, it
> > > will be unable to unseal it since PCR 23 will never be in the
> > > appropriate state.
> > >
> > > This Kconfig is only properly supported for systems with TPM2 devices.
> > > For systems with TPM1 devices, having this Kconfig enabled completely
> > > restricts usermode's access to the TPM. TPM1 contains support for
> > > tunnelled transports, which usermode could use to smuggle commands
> > > through that this Kconfig is attempting to restrict.
> > >
> > > Link: https://lore.kernel.org/lkml/[email protected]/
> > > Signed-off-by: Matthew Garrett <[email protected]>
> > > Signed-off-by: Evan Green <[email protected]>
> > > ---
> > >
> > > Changes in v4:
> > > - Augment the commit message (Jarkko)
> > >
> > > Changes in v3:
> > > - Fix up commit message (Jarkko)
> > > - tpm2_find_and_validate_cc() was split (Jarkko)
> > > - Simply fully restrict TPM1 since v2 failed to account for tunnelled
> > > transport sessions (Stefan and Jarkko).
> > >
> > > Changes in v2:
> > > - Fixed sparse warnings
> > >
> > > drivers/char/tpm/Kconfig | 12 ++++++++++++
> > > drivers/char/tpm/tpm-dev-common.c | 8 ++++++++
> > > drivers/char/tpm/tpm.h | 19 +++++++++++++++++++
> > > drivers/char/tpm/tpm1-cmd.c | 13 +++++++++++++
> > > drivers/char/tpm/tpm2-cmd.c | 22 ++++++++++++++++++++++
> > > 5 files changed, 74 insertions(+)
> > >
> > > diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> > > index 927088b2c3d3f2..c8ed54c66e399a 100644
> > > --- a/drivers/char/tpm/Kconfig
> > > +++ b/drivers/char/tpm/Kconfig
> > > @@ -211,4 +211,16 @@ config TCG_FTPM_TEE
> > > This driver proxies for firmware TPM running in TEE.
> > >
> > > source "drivers/char/tpm/st33zp24/Kconfig"
> > > +
> > > +config TCG_TPM_RESTRICT_PCR
> > > + bool "Restrict userland access to PCR 23"
> > > + depends on TCG_TPM
> > > + help
> > > + If set, block userland from extending or resetting PCR 23. This allows it
> > > + to be restricted to in-kernel use, preventing userland from being able to
> > > + make use of data sealed to the TPM by the kernel. This is required for
> > > + secure hibernation support, but should be left disabled if any userland
> > > + may require access to PCR23. This is a TPM2-only feature, and if enabled
> > > + on a TPM1 machine will cause all usermode TPM commands to return EPERM due
> > > + to the complications introduced by tunnelled sessions in TPM1.2.
> > > endif # TCG_TPM
> > > diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
> > > index dc4c0a0a512903..7a4e618c7d1942 100644
> > > --- a/drivers/char/tpm/tpm-dev-common.c
> > > +++ b/drivers/char/tpm/tpm-dev-common.c
> > > @@ -198,6 +198,14 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
> > > priv->response_read = false;
> > > *off = 0;
> > >
> > > + if (priv->chip->flags & TPM_CHIP_FLAG_TPM2)
> > > + ret = tpm2_cmd_restricted(priv->chip, priv->data_buffer, size);
> > > + else
> > > + ret = tpm1_cmd_restricted(priv->chip, priv->data_buffer, size);
> > > +
> > > + if (ret)
> > > + goto out;
> > > +
> > > /*
> > > * If in nonblocking mode schedule an async job to send
> > > * the command return the size.
> > > diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> > > index f1e0f490176f01..c0845e3f9eda17 100644
> > > --- a/drivers/char/tpm/tpm.h
> > > +++ b/drivers/char/tpm/tpm.h
> > > @@ -245,4 +245,23 @@ void tpm_bios_log_setup(struct tpm_chip *chip);
> > > void tpm_bios_log_teardown(struct tpm_chip *chip);
> > > int tpm_dev_common_init(void);
> > > void tpm_dev_common_exit(void);
> > > +
> > > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > > +#define TPM_RESTRICTED_PCR 23
> > > +
> > > +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> > > +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> > > +#else
> > > +static inline int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> > > + size_t size)
> > > +{
> > > + return 0;
> > > +}
> > > +
> > > +static inline int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> > > + size_t size)
> > > +{
> > > + return 0;
> > > +}
> > > +#endif
> > > #endif
> > > diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
> > > index cf64c738510529..1869e89215fcb9 100644
> > > --- a/drivers/char/tpm/tpm1-cmd.c
> > > +++ b/drivers/char/tpm/tpm1-cmd.c
> > > @@ -811,3 +811,16 @@ int tpm1_get_pcr_allocation(struct tpm_chip *chip)
> > >
> > > return 0;
> > > }
> > > +
> > > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > > +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> > > +{
> > > + /*
> > > + * Restrict all usermode commands on TPM1.2. Ideally we'd just restrict
> > > + * TPM_ORD_PCR_EXTEND and TPM_ORD_PCR_RESET, but TPM1.2 also supports
> > > + * tunnelled transport sessions where the kernel would be unable to filter
> > > + * commands.
> > > + */
> > > + return -EPERM;
> > > +}
> > > +#endif
> > > diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
> > > index 303ce2ea02a4b0..e0503cfd7bcfee 100644
> > > --- a/drivers/char/tpm/tpm2-cmd.c
> > > +++ b/drivers/char/tpm/tpm2-cmd.c
> > > @@ -778,3 +778,25 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc)
> > >
> > > return -1;
> > > }
> > > +
> > > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > > +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> > > +{
> > > + int cc = tpm2_find_and_validate_cc(chip, NULL, buffer, size);
> > > + __be32 *handle;
> > > +
> > > + switch (cc) {
> > > + case TPM2_CC_PCR_EXTEND:
> > > + case TPM2_CC_PCR_RESET:
> > > + if (size < (TPM_HEADER_SIZE + sizeof(u32)))
> > > + return -EINVAL;
> > > +
> > > + handle = (__be32 *)&buffer[TPM_HEADER_SIZE];
> > > + if (be32_to_cpu(*handle) == TPM_RESTRICTED_PCR)
> > > + return -EPERM;
> > > + break;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +#endif
> > > --
> > > 2.38.1.431.g37b22c650d-goog
> > >
> >
> > This looks otherwise good but I have still one remark: what is the reason
> > for restricting PCR23 for TPM 1.x?
>
> Mostly I was trying to do the least surprising thing for someone who
> had compiled with this RESTRICT_PCR Kconfig enabled but booted a TPM1
> system. If we do nothing for TPM1, then the encrypted hibernation
> mechanism appears to work fine, but leaves a gaping hole where
> usermode can manipulate PCR23 themselves to create forged encrypted
> hibernate images. Denying all usermode access makes the Kconfig
> correct on TPM1 systems, at the expense of all usermode access (rather
> than just access to PCR23).
>
> An alternative that might be friendlier to users would be to do a
> runtime check in the encrypted hibernate code to simply fail if this
> isn't TPM2. The tradeoff there is that it waters down the Kconfig
> significantly to "RESTRICT_PCR sometimes, if you can, otherwise meh".
> That seemed a bit dangerous, as any future features that may want to
> rely on this Kconfig would have to remember to restrict their support
> to TPM2 as well.

I got talked into revising my stance here, in that breaking usermode
access to TPM1.2 if this Kconfig is set means virtually nobody can
enable this Kconfig. Plus I think doing nothing for TPM1.2 will make
Jarkko happier :). So my new plan is to rename this config to
TCG_TPM2_RESTRICT_PCR, and then try to document very clearly that this
Kconfig only restricts usermode access to the PCR on TPM2.0 devices.
The hibernate code already blocks TPM1.2 devices, so from this series'
perspective the upcoming change should be a no-op.



-Evan

2022-11-24 00:10:38

by Jarkko Sakkinen

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] tpm: Allow PCR 23 to be restricted to kernel-only use

On Mon, Nov 07, 2022 at 10:15:27AM -0800, Evan Green wrote:
> On Mon, Nov 7, 2022 at 3:40 AM Jarkko Sakkinen <[email protected]> wrote:
> >
> > On Thu, Nov 03, 2022 at 11:01:11AM -0700, Evan Green wrote:
> > > From: Matthew Garrett <[email protected]>
> > >
> > > Introduce a new Kconfig, TCG_TPM_RESTRICT_PCR, which if enabled
> > > restricts usermode's ability to extend or reset PCR 23.
> > >
> > > Under certain circumstances it might be desirable to enable the creation
> > > of TPM-backed secrets that are only accessible to the kernel. In an
> > > ideal world this could be achieved by using TPM localities, but these
> > > don't appear to be available on consumer systems. An alternative is to
> > > simply block userland from modifying one of the resettable PCRs, leaving
> > > it available to the kernel. If the kernel ensures that no userland can
> > > access the TPM while it is carrying out work, it can reset PCR 23,
> > > extend it to an arbitrary value, create or load a secret, and then reset
> > > the PCR again. Even if userland somehow obtains the sealed material, it
> > > will be unable to unseal it since PCR 23 will never be in the
> > > appropriate state.
> > >
> > > This Kconfig is only properly supported for systems with TPM2 devices.
> > > For systems with TPM1 devices, having this Kconfig enabled completely
> > > restricts usermode's access to the TPM. TPM1 contains support for
> > > tunnelled transports, which usermode could use to smuggle commands
> > > through that this Kconfig is attempting to restrict.
> > >
> > > Link: https://lore.kernel.org/lkml/[email protected]/
> > > Signed-off-by: Matthew Garrett <[email protected]>
> > > Signed-off-by: Evan Green <[email protected]>
> > > ---
> > >
> > > Changes in v4:
> > > - Augment the commit message (Jarkko)
> > >
> > > Changes in v3:
> > > - Fix up commit message (Jarkko)
> > > - tpm2_find_and_validate_cc() was split (Jarkko)
> > > - Simply fully restrict TPM1 since v2 failed to account for tunnelled
> > > transport sessions (Stefan and Jarkko).
> > >
> > > Changes in v2:
> > > - Fixed sparse warnings
> > >
> > > drivers/char/tpm/Kconfig | 12 ++++++++++++
> > > drivers/char/tpm/tpm-dev-common.c | 8 ++++++++
> > > drivers/char/tpm/tpm.h | 19 +++++++++++++++++++
> > > drivers/char/tpm/tpm1-cmd.c | 13 +++++++++++++
> > > drivers/char/tpm/tpm2-cmd.c | 22 ++++++++++++++++++++++
> > > 5 files changed, 74 insertions(+)
> > >
> > > diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
> > > index 927088b2c3d3f2..c8ed54c66e399a 100644
> > > --- a/drivers/char/tpm/Kconfig
> > > +++ b/drivers/char/tpm/Kconfig
> > > @@ -211,4 +211,16 @@ config TCG_FTPM_TEE
> > > This driver proxies for firmware TPM running in TEE.
> > >
> > > source "drivers/char/tpm/st33zp24/Kconfig"
> > > +
> > > +config TCG_TPM_RESTRICT_PCR
> > > + bool "Restrict userland access to PCR 23"
> > > + depends on TCG_TPM
> > > + help
> > > + If set, block userland from extending or resetting PCR 23. This allows it
> > > + to be restricted to in-kernel use, preventing userland from being able to
> > > + make use of data sealed to the TPM by the kernel. This is required for
> > > + secure hibernation support, but should be left disabled if any userland
> > > + may require access to PCR23. This is a TPM2-only feature, and if enabled
> > > + on a TPM1 machine will cause all usermode TPM commands to return EPERM due
> > > + to the complications introduced by tunnelled sessions in TPM1.2.
> > > endif # TCG_TPM
> > > diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
> > > index dc4c0a0a512903..7a4e618c7d1942 100644
> > > --- a/drivers/char/tpm/tpm-dev-common.c
> > > +++ b/drivers/char/tpm/tpm-dev-common.c
> > > @@ -198,6 +198,14 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
> > > priv->response_read = false;
> > > *off = 0;
> > >
> > > + if (priv->chip->flags & TPM_CHIP_FLAG_TPM2)
> > > + ret = tpm2_cmd_restricted(priv->chip, priv->data_buffer, size);
> > > + else
> > > + ret = tpm1_cmd_restricted(priv->chip, priv->data_buffer, size);
> > > +
> > > + if (ret)
> > > + goto out;
> > > +
> > > /*
> > > * If in nonblocking mode schedule an async job to send
> > > * the command return the size.
> > > diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> > > index f1e0f490176f01..c0845e3f9eda17 100644
> > > --- a/drivers/char/tpm/tpm.h
> > > +++ b/drivers/char/tpm/tpm.h
> > > @@ -245,4 +245,23 @@ void tpm_bios_log_setup(struct tpm_chip *chip);
> > > void tpm_bios_log_teardown(struct tpm_chip *chip);
> > > int tpm_dev_common_init(void);
> > > void tpm_dev_common_exit(void);
> > > +
> > > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > > +#define TPM_RESTRICTED_PCR 23
> > > +
> > > +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> > > +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
> > > +#else
> > > +static inline int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> > > + size_t size)
> > > +{
> > > + return 0;
> > > +}
> > > +
> > > +static inline int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
> > > + size_t size)
> > > +{
> > > + return 0;
> > > +}
> > > +#endif
> > > #endif
> > > diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
> > > index cf64c738510529..1869e89215fcb9 100644
> > > --- a/drivers/char/tpm/tpm1-cmd.c
> > > +++ b/drivers/char/tpm/tpm1-cmd.c
> > > @@ -811,3 +811,16 @@ int tpm1_get_pcr_allocation(struct tpm_chip *chip)
> > >
> > > return 0;
> > > }
> > > +
> > > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > > +int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> > > +{
> > > + /*
> > > + * Restrict all usermode commands on TPM1.2. Ideally we'd just restrict
> > > + * TPM_ORD_PCR_EXTEND and TPM_ORD_PCR_RESET, but TPM1.2 also supports
> > > + * tunnelled transport sessions where the kernel would be unable to filter
> > > + * commands.
> > > + */
> > > + return -EPERM;
> > > +}
> > > +#endif
> > > diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
> > > index 303ce2ea02a4b0..e0503cfd7bcfee 100644
> > > --- a/drivers/char/tpm/tpm2-cmd.c
> > > +++ b/drivers/char/tpm/tpm2-cmd.c
> > > @@ -778,3 +778,25 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc)
> > >
> > > return -1;
> > > }
> > > +
> > > +#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
> > > +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
> > > +{
> > > + int cc = tpm2_find_and_validate_cc(chip, NULL, buffer, size);
> > > + __be32 *handle;
> > > +
> > > + switch (cc) {
> > > + case TPM2_CC_PCR_EXTEND:
> > > + case TPM2_CC_PCR_RESET:
> > > + if (size < (TPM_HEADER_SIZE + sizeof(u32)))
> > > + return -EINVAL;
> > > +
> > > + handle = (__be32 *)&buffer[TPM_HEADER_SIZE];
> > > + if (be32_to_cpu(*handle) == TPM_RESTRICTED_PCR)
> > > + return -EPERM;
> > > + break;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +#endif
> > > --
> > > 2.38.1.431.g37b22c650d-goog
> > >
> >
> > This looks otherwise good but I have still one remark: what is the reason
> > for restricting PCR23 for TPM 1.x?
>
> Mostly I was trying to do the least surprising thing for someone who
> had compiled with this RESTRICT_PCR Kconfig enabled but booted a TPM1
> system. If we do nothing for TPM1, then the encrypted hibernation
> mechanism appears to work fine, but leaves a gaping hole where
> usermode can manipulate PCR23 themselves to create forged encrypted
> hibernate images. Denying all usermode access makes the Kconfig
> correct on TPM1 systems, at the expense of all usermode access (rather
> than just access to PCR23).

OK, I buy this. Can you add inline comment perhaps denoting this?


BR, Jarkko