2018-06-20 09:36:00

by Chen Yu

Subject: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

Hi,
As security becomes more and more important, we add the in-kernel
encryption support for hibernation.

This prototype implements hibernation encryption in the kernel,
so that users do not have to rely on third-party tools to encrypt
the hibernation image. The only dependency on user space is that
user space must provide a valid key, derived from a passphrase,
to the kernel for image encryption.

There was a discussion on the mailing list about whether this key should
be derived in the kernel or in user space, and it turned out that
generating the key in user space is more acceptable[1]. So this patch
set is divided into two parts:
1. The hibernation snapshot encryption in kernel space,
2. the key derivation implementation in user space.

Please refer to each patch for details, and feel free to comment,
thanks.

[1] https://www.spinics.net/lists/linux-crypto/msg33145.html

Chen Yu (3):
PM / Hibernate: Add helper functions for hibernation encryption
PM / Hibernate: Encrypt the snapshot pages before submitted to the
block device
tools: create power/crypto utility

MAINTAINERS | 8 +
kernel/power/Kconfig | 13 +
kernel/power/Makefile | 1 +
kernel/power/crypto_hibernation.c | 405 ++++++++++++++++++++++++++++++
kernel/power/power.h | 38 +++
kernel/power/swap.c | 215 +++++++++++++++-
tools/power/crypto/Makefile | 26 ++
tools/power/crypto/crypto_hibernate.c | 447 ++++++++++++++++++++++++++++++++++
8 files changed, 1142 insertions(+), 11 deletions(-)
create mode 100644 kernel/power/crypto_hibernation.c
create mode 100644 tools/power/crypto/Makefile
create mode 100644 tools/power/crypto/crypto_hibernate.c

--
2.7.4



2018-06-20 09:36:19

by Chen Yu

Subject: [PATCH 3/3][RFC] tools: create power/crypto utility

crypto_hibernate is a user-space utility to generate
a 512-bit AES key and pass it to the kernel via ioctl
for hibernation encryption. (We could also add the key
to the kernel via keyctl if necessary, but currently
using ioctl seems more straightforward, as both the
key and the salt need to be transferred.)

The key derivation is based on a simplified implementation
of PBKDF2[1] from e2fsprogs - both the key length and the
hash output are the same size, 512 bits. crypto_hibernate
first prompts the user for a passphrase and reads the salt
from the kernel, then uses them to generate a 512-bit AES
key based on PBKDF2.
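
For reference, the simplified derivation implemented in
pbkdf2_sha512() below boils down to the following, with
H = SHA-512 and c = PBKDF2_ITERATIONS (0xFFFF):

  T_1 = H(salt || passphrase)
  T_i = H(T_(i-1) || passphrase),   i = 2..c
  key = T_1 xor T_2 xor ... xor T_c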

Usage:
1. install the kernel module:
modprobe crypto_hibernation
2. run this tool to generate the key from the
user-provided passphrase (the salt is needed too).
3. launch the hibernation process; the kernel
uses the key from step 2 to encrypt the
hibernation snapshot.
4. resume the system; the initrd launches
crypto_hibernate to read the previous salt
from the kernel, prompt the user for the
passphrase and generate the same key.
5. the kernel uses this key to decrypt the hibernation
snapshot and restore the previous system.

[1] https://www.ietf.org/rfc/rfc2898.txt (5.2 PBKDF2)

Suggested-by: "Theodore Ts'o" <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Cc: "Theodore Ts'o" <[email protected]>
Cc: Stephan Mueller <[email protected]>
Cc: Eric Biggers <[email protected]>
Cc: Denis Kenzior <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Chen Yu <[email protected]>
---
MAINTAINERS | 8 +
tools/power/crypto/Makefile | 26 ++
tools/power/crypto/crypto_hibernate.c | 447 ++++++++++++++++++++++++++++++++++
3 files changed, 481 insertions(+)
create mode 100644 tools/power/crypto/Makefile
create mode 100644 tools/power/crypto/crypto_hibernate.c

diff --git a/MAINTAINERS b/MAINTAINERS
index cb468a5..e851afb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6393,6 +6393,14 @@ F: include/linux/freezer.h
F: include/linux/pm.h
F: arch/*/include/asm/suspend*.h

+HIBERNATION CRYPTO UTILITY
+M: "Chen Yu" <[email protected]>
+M: "Rafael J. Wysocki" <[email protected]>
+L: [email protected]
+B: https://bugzilla.kernel.org
+S: Supported
+F: tools/power/crypto/
+
HID CORE LAYER
M: Jiri Kosina <[email protected]>
R: Benjamin Tissoires <[email protected]>
diff --git a/tools/power/crypto/Makefile b/tools/power/crypto/Makefile
new file mode 100644
index 0000000..420301b
--- /dev/null
+++ b/tools/power/crypto/Makefile
@@ -0,0 +1,26 @@
+# SPDX-License-Identifier: GPL-2.0
+CC = $(CROSS_COMPILE)gcc
+BUILD_OUTPUT := $(CURDIR)
+PREFIX ?= /usr
+DESTDIR ?=
+
+ifeq ("$(origin O)", "command line")
+ BUILD_OUTPUT := $(O)
+endif
+
+crypto_hibernate : crypto_hibernate.c
+CFLAGS += -Wall
+
+%: %.c
+ @mkdir -p $(BUILD_OUTPUT)
+ $(CC) $(CFLAGS) $< -o $(BUILD_OUTPUT)/$@
+
+.PHONY : clean
+clean :
+ @rm -f $(BUILD_OUTPUT)/crypto_hibernate
+
+install : crypto_hibernate
+ install -d $(DESTDIR)$(PREFIX)/bin
+ install $(BUILD_OUTPUT)/crypto_hibernate $(DESTDIR)$(PREFIX)/bin/crypto_hibernate
+ install -d $(DESTDIR)$(PREFIX)/share/man/man8
+ install turbostat.8 $(DESTDIR)$(PREFIX)/share/man/man8
diff --git a/tools/power/crypto/crypto_hibernate.c b/tools/power/crypto/crypto_hibernate.c
new file mode 100644
index 0000000..149ba78
--- /dev/null
+++ b/tools/power/crypto/crypto_hibernate.c
@@ -0,0 +1,447 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * The hibernation key derivation function
+ *
+ * Copyright (C) 2004 Sam Hocevar <[email protected]>
+ * Copyright (C) 2018 Theodore Ts'o <[email protected]>
+ * (copied from e2fsprogs)
+ *
+ * The key derivation is based on a simplified implementation
+ * of PBKDF2 in e2fsprogs - both the key length and the hash
+ * output are the same size - 512 bits. crypto_hibernate first
+ * prompts the user for the passphrase and salt, then uses them to
+ * generate a 512-bit AES key via SHA-512 hashing and PBKDF2.
+ *
+ * Usage:
+ * 1. install the kernel module:
+ * modprobe crypto_hibernation
+ * 2. run this tool to generate the key from
+ * user provided passphrase (salt is read from kernel)
+ * 3. launch the hibernation process, the kernel
+ * uses the key from step 2 to encrypt the
+ * hibernation snapshot
+ * 4. resume the system and the initrd will
+ * launch crypto_hibernate to read the previous salt
+ * from the kernel and prompt the user for the passphrase
+ * and generate the same key
+ * 5. kernel uses this key to decrypt the hibernation
+ * snapshot.
+ *
+ * %Begin-Header%
+ * This file may be redistributed under the terms of the GNU Library
+ * General Public License, version 2.
+ * %End-Header%
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <stdbool.h>
+#include <assert.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/syscall.h>
+#include <sys/ioctl.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <termios.h>
+
+#if HAVE_SYS_TYPES_H
+#include <sys/types.h>
+#endif
+
+#define PBKDF2_ITERATIONS 0xFFFF
+#define SHA512_BLOCKSIZE 128
+#define SHA512_LENGTH 64
+#define SALT_BYTES 16
+#define SYM_KEY_BYTES SHA512_LENGTH
+#define TOTAL_USER_INFO_LEN (SALT_BYTES+SYM_KEY_BYTES)
+#define MAX_PASSPHRASE_SIZE 1024
+
+struct hibernation_crypto_keys {
+ char derived_key[SYM_KEY_BYTES];
+ char salt[SALT_BYTES];
+ bool valid;
+};
+
+struct hibernation_crypto_keys hib_keys;
+
+static char *get_key_ptr(void)
+{
+ return hib_keys.derived_key;
+}
+
+static char *get_salt_ptr(void)
+{
+ return hib_keys.salt;
+}
+
+static const unsigned int K[64] = {
+ 0x428a2f98UL, 0x71374491UL, 0xb5c0fbcfUL, 0xe9b5dba5UL, 0x3956c25bUL,
+ 0x59f111f1UL, 0x923f82a4UL, 0xab1c5ed5UL, 0xd807aa98UL, 0x12835b01UL,
+ 0x243185beUL, 0x550c7dc3UL, 0x72be5d74UL, 0x80deb1feUL, 0x9bdc06a7UL,
+ 0xc19bf174UL, 0xe49b69c1UL, 0xefbe4786UL, 0x0fc19dc6UL, 0x240ca1ccUL,
+ 0x2de92c6fUL, 0x4a7484aaUL, 0x5cb0a9dcUL, 0x76f988daUL, 0x983e5152UL,
+ 0xa831c66dUL, 0xb00327c8UL, 0xbf597fc7UL, 0xc6e00bf3UL, 0xd5a79147UL,
+ 0x06ca6351UL, 0x14292967UL, 0x27b70a85UL, 0x2e1b2138UL, 0x4d2c6dfcUL,
+ 0x53380d13UL, 0x650a7354UL, 0x766a0abbUL, 0x81c2c92eUL, 0x92722c85UL,
+ 0xa2bfe8a1UL, 0xa81a664bUL, 0xc24b8b70UL, 0xc76c51a3UL, 0xd192e819UL,
+ 0xd6990624UL, 0xf40e3585UL, 0x106aa070UL, 0x19a4c116UL, 0x1e376c08UL,
+ 0x2748774cUL, 0x34b0bcb5UL, 0x391c0cb3UL, 0x4ed8aa4aUL, 0x5b9cca4fUL,
+ 0x682e6ff3UL, 0x748f82eeUL, 0x78a5636fUL, 0x84c87814UL, 0x8cc70208UL,
+ 0x90befffaUL, 0xa4506cebUL, 0xbef9a3f7UL, 0xc67178f2UL
+};
+
+/* Various logical functions */
+#define Ch(x,y,z) (z ^ (x & (y ^ z)))
+#define Maj(x,y,z) (((x | y) & z) | (x & y))
+#define S(x, n) RORc((x),(n))
+#define R(x, n) (((x)&0xFFFFFFFFUL)>>(n))
+#define Sigma0(x) (S(x, 2) ^ S(x, 13) ^ S(x, 22))
+#define Sigma1(x) (S(x, 6) ^ S(x, 11) ^ S(x, 25))
+#define Gamma0(x) (S(x, 7) ^ S(x, 18) ^ R(x, 3))
+#define Gamma1(x) (S(x, 17) ^ S(x, 19) ^ R(x, 10))
+#define RORc(x, y) ( ((((unsigned int)(x)&0xFFFFFFFFUL)>>(unsigned int)((y)&31)) | ((unsigned int)(x)<<(unsigned int)(32-((y)&31)))) & 0xFFFFFFFFUL)
+
+#define RND(a,b,c,d,e,f,g,h,i) \
+ t0 = h + Sigma1(e) + Ch(e, f, g) + K[i] + W[i]; \
+ t1 = Sigma0(a) + Maj(a, b, c); \
+ d += t0; \
+ h = t0 + t1;
+
+#define STORE64H(x, y) \
+ do { \
+ (y)[0] = (unsigned char)(((x)>>56)&255);\
+ (y)[1] = (unsigned char)(((x)>>48)&255);\
+ (y)[2] = (unsigned char)(((x)>>40)&255);\
+ (y)[3] = (unsigned char)(((x)>>32)&255);\
+ (y)[4] = (unsigned char)(((x)>>24)&255);\
+ (y)[5] = (unsigned char)(((x)>>16)&255);\
+ (y)[6] = (unsigned char)(((x)>>8)&255);\
+ (y)[7] = (unsigned char)((x)&255); } while(0)
+
+#define STORE32H(x, y) \
+ do { (y)[0] = (unsigned char)(((x)>>24)&255); (y)[1] = (unsigned char)(((x)>>16)&255); \
+ (y)[2] = (unsigned char)(((x)>>8)&255); (y)[3] = (unsigned char)((x)&255); } while(0)
+
+#define LOAD32H(x, y) \
+ do { x = ((unsigned int)((y)[0] & 255)<<24) | \
+ ((unsigned int)((y)[1] & 255)<<16) | \
+ ((unsigned int)((y)[2] & 255)<<8) | \
+ ((unsigned int)((y)[3] & 255)); } while(0)
+
+struct sha512_state {
+ unsigned long long length;
+ unsigned int state[8], curlen;
+ unsigned char buf[64];
+};
+
+/* This is a highly simplified version from libtomcrypt */
+struct hash_state {
+ struct sha512_state sha512;
+};
+
+static void sha512_compress(struct hash_state * md, const unsigned char *buf)
+{
+ unsigned int S[8], W[64], t0, t1;
+ unsigned int t;
+ int i;
+
+ /* copy state into S */
+ for (i = 0; i < 8; i++) {
+ S[i] = md->sha512.state[i];
+ }
+
+ /* copy the state into 512-bits into W[0..15] */
+ for (i = 0; i < 16; i++) {
+ LOAD32H(W[i], buf + (4*i));
+ }
+
+ /* fill W[16..63] */
+ for (i = 16; i < 64; i++) {
+ W[i] = Gamma1(W[i - 2]) + W[i - 7] + Gamma0(W[i - 15]) + W[i - 16];
+ }
+
+ /* Compress */
+ for (i = 0; i < 64; ++i) {
+ RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],i);
+ t = S[7]; S[7] = S[6]; S[6] = S[5]; S[5] = S[4];
+ S[4] = S[3]; S[3] = S[2]; S[2] = S[1]; S[1] = S[0]; S[0] = t;
+ }
+
+ /* feedback */
+ for (i = 0; i < 8; i++) {
+ md->sha512.state[i] = md->sha512.state[i] + S[i];
+ }
+}
+
+static void sha512_init(struct hash_state * md)
+{
+ md->sha512.curlen = 0;
+ md->sha512.length = 0;
+ md->sha512.state[0] = 0x6A09E667UL;
+ md->sha512.state[1] = 0xBB67AE85UL;
+ md->sha512.state[2] = 0x3C6EF372UL;
+ md->sha512.state[3] = 0xA54FF53AUL;
+ md->sha512.state[4] = 0x510E527FUL;
+ md->sha512.state[5] = 0x9B05688CUL;
+ md->sha512.state[6] = 0x1F83D9ABUL;
+ md->sha512.state[7] = 0x5BE0CD19UL;
+}
+
+#define MIN(x, y) ( ((x)<(y))?(x):(y) )
+
+static void sha512_process(struct hash_state * md, const unsigned char *in, unsigned long inlen)
+{
+ unsigned long n;
+
+ while (inlen > 0) {
+ if (md->sha512.curlen == 0 && inlen >= SHA512_BLOCKSIZE) {
+ sha512_compress(md, in);
+ md->sha512.length += SHA512_BLOCKSIZE * 8;
+ in += SHA512_BLOCKSIZE;
+ inlen -= SHA512_BLOCKSIZE;
+ } else {
+ n = MIN(inlen, (SHA512_BLOCKSIZE - md->sha512.curlen));
+ memcpy(md->sha512.buf + md->sha512.curlen, in, (size_t)n);
+ md->sha512.curlen += n;
+ in += n;
+ inlen -= n;
+ if (md->sha512.curlen == SHA512_BLOCKSIZE) {
+ sha512_compress(md, md->sha512.buf);
+ md->sha512.length += 8*SHA512_BLOCKSIZE;
+ md->sha512.curlen = 0;
+ }
+ }
+ }
+}
+
+static void sha512_done(struct hash_state * md, unsigned char *out)
+{
+ int i;
+
+ /* increase the length of the message */
+ md->sha512.length += md->sha512.curlen * 8;
+
+ /* append the '1' bit */
+ md->sha512.buf[md->sha512.curlen++] = (unsigned char)0x80;
+
+ /* if the length is currently above 56 bytes we append zeros
+ * then compress. Then we can fall back to padding zeros and length
+ * encoding like normal.
+ */
+ if (md->sha512.curlen > 56) {
+ while (md->sha512.curlen < 64) {
+ md->sha512.buf[md->sha512.curlen++] = (unsigned char)0;
+ }
+ sha512_compress(md, md->sha512.buf);
+ md->sha512.curlen = 0;
+ }
+
+ /* pad upto 56 bytes of zeroes */
+ while (md->sha512.curlen < 56) {
+ md->sha512.buf[md->sha512.curlen++] = (unsigned char)0;
+ }
+
+ /* store length */
+ STORE64H(md->sha512.length, md->sha512.buf+56);
+ sha512_compress(md, md->sha512.buf);
+
+ /* copy output */
+ for (i = 0; i < 8; i++) {
+ STORE32H(md->sha512.state[i], out+(4*i));
+ }
+}
+
+void start_sha512(const unsigned char *in, unsigned long in_size,
+ unsigned char out[SHA512_LENGTH])
+{
+ struct hash_state md;
+
+ sha512_init(&md);
+ sha512_process(&md, in, in_size);
+ sha512_done(&md, out);
+}
+
+static void pbkdf2_sha512(const char *passphrase, const char *salt,
+ unsigned int count,
+ char *derived_key)
+{
+ size_t passphrase_size = strlen(passphrase);
+ unsigned char buf[SHA512_LENGTH + MAX_PASSPHRASE_SIZE] = {0};
+ unsigned char tempbuf[SHA512_LENGTH] = {0};
+ char final[SHA512_LENGTH] = {0};
+ unsigned char saltbuf[SALT_BYTES + MAX_PASSPHRASE_SIZE] = {0};
+ int actual_buf_len = SHA512_LENGTH + passphrase_size;
+ int actual_saltbuf_len = SALT_BYTES + passphrase_size;
+ unsigned int x, y;
+ unsigned int *final_u32 = (unsigned int *)final;
+ unsigned int *temp_u32 = (unsigned int *)tempbuf;
+
+ memcpy(saltbuf, salt, SALT_BYTES);
+ memcpy(&saltbuf[SALT_BYTES], passphrase, passphrase_size);
+ memcpy(&buf[SHA512_LENGTH], passphrase, passphrase_size);
+
+ for (x = 0; x < count; ++x) {
+ if (x == 0) {
+ start_sha512(saltbuf, actual_saltbuf_len, tempbuf);
+ } else {
+ /*
+ * buf: [previous hash || passphrase]
+ */
+ memcpy(buf, tempbuf, SHA512_LENGTH);
+ start_sha512(buf, actual_buf_len, tempbuf);
+ }
+ for (y = 0; y < (sizeof(final) / sizeof(*final_u32)); ++y)
+ final_u32[y] = final_u32[y] ^ temp_u32[y];
+ }
+ memcpy(derived_key, final, SYM_KEY_BYTES);
+}
+
+#define HIBERNATE_SALT_READ _IOW('C', 3, struct hibernation_crypto_keys)
+#define HIBERNATE_KEY_WRITE _IOW('C', 4, struct hibernation_crypto_keys)
+
+static int disable_echo(struct termios *saved_settings)
+{
+ struct termios current_settings;
+ int rc = 0;
+
+ rc = tcgetattr(0, &current_settings);
+ if (rc)
+ return rc;
+ *saved_settings = current_settings;
+ current_settings.c_lflag &= ~ECHO;
+ rc = tcsetattr(0, TCSANOW, &current_settings);
+
+ return rc;
+}
+
+static void get_passphrase(char *passphrase, int len)
+{
+ char *p;
+ struct termios current_settings;
+
+ assert(len > 0);
+ disable_echo(&current_settings);
+ p = fgets(passphrase, len, stdin);
+ tcsetattr(0, TCSANOW, &current_settings);
+ printf("\n");
+ if (!p) {
+ printf("Aborting.\n");
+ exit(1);
+ }
+ p = strrchr(passphrase, '\n');
+ if (!p)
+ p = passphrase + len - 1;
+ *p = '\0';
+}
+
+#define CRYPTO_FILE "/dev/crypto_hibernate"
+
+static int write_keys(void)
+{
+ int fd;
+
+ fd = open(CRYPTO_FILE, O_RDWR);
+ if (fd < 0) {
+ printf("Cannot open device file...\n");
+ return -EINVAL;
+ }
+ ioctl(fd, HIBERNATE_KEY_WRITE, get_key_ptr());
+ return 0;
+}
+
+static int read_salt(void)
+{
+ int fd;
+
+ fd = open(CRYPTO_FILE, O_RDWR);
+ if (fd < 0) {
+ printf("Cannot open device file...\n");
+ return -EINVAL;
+ }
+ ioctl(fd, HIBERNATE_SALT_READ, get_salt_ptr());
+ return 0;
+}
+
+int key_derive_from_passphrase(const char *pass)
+{
+ unsigned int pass_len = strlen(pass);
+
+ if (pass_len > MAX_PASSPHRASE_SIZE) {
+ printf("Passphrase size is %d; max is %d.\n", pass_len,
+ MAX_PASSPHRASE_SIZE);
+ exit(1);
+ }
+
+ /* Need to get salt from
+ * kernel first.
+ */
+ if (read_salt())
+ exit(1);
+ /* Store the derived key in result buf. */
+ pbkdf2_sha512(pass, get_salt_ptr(), PBKDF2_ITERATIONS, get_key_ptr());
+ if (write_keys())
+ exit(1);
+
+ return 0;
+}
+
+void help(void)
+{
+ printf(
+ "Usage: crypto_hibernate [OPTIONS]\n"
+ "-p passphrase [probed from user if not given]\n"
+ "-s salt [read from kernel if not given]\n");
+}
+
+int main(int argc, char *argv[])
+{
+ int opt, option_index = 0;
+ char in_passphrase[MAX_PASSPHRASE_SIZE];
+
+ while ((opt = getopt_long_only(argc, argv, "+p:s:h",
+ NULL, &option_index)) != -1) {
+ switch (opt) {
+ case 'p':
+ {
+ char *p = optarg;
+
+ if (strlen(p) >= (MAX_PASSPHRASE_SIZE - 1)) {
+ printf("Please provide passphrase less than %d bytes.\n",
+ MAX_PASSPHRASE_SIZE);
+ exit(1);
+ }
+ strcpy(in_passphrase, p);
+ }
+ break;
+ case 's':
+ {
+ char *p = optarg;
+
+ if (strlen(p) != (SALT_BYTES - 1)) {
+ printf("Please provide salt with len less than %d bytes.\n",
+ SALT_BYTES);
+ exit(1);
+ }
+ strcpy(get_salt_ptr(), p);
+ }
+ break;
+ case 'h':
+ default:
+ help();
+ exit(1);
+ }
+ }
+
+ printf("Enter passphrase (echo disabled): ");
+ get_passphrase(in_passphrase, sizeof(in_passphrase));
+
+ if (key_derive_from_passphrase(in_passphrase))
+ exit(1);
+
+ return 0;
+}
--
2.7.4


2018-06-20 09:36:38

by Chen Yu

Subject: [PATCH 1/3][RFC] PM / Hibernate: Add helper functions for hibernation encryption

Basically, this solution encrypts the pages before
they go to the block device.

There was a discussion on the mailing list about whether
the key should be derived in the kernel or in user space,
and it turned out that generating the key in user space
is more suitable[1]. The procedure is illustrated below:

1. User space reads the salt from the kernel and
generates a symmetric (AES) key based on the user's
passphrase. Then the kernel uses that key to
encrypt the hibernation image.
2. The salt is saved in the image header and passed to
the restore kernel.
3. During restore, user space reads the salt
from the kernel, prompts the user for the passphrase
to generate the same key, and passes that key back to the kernel.
4. The restore kernel uses that key to decrypt the image.
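
In short, the split of responsibilities looks like this (just a
summary of the steps above, no additional interfaces are implied):

  user space (crypto_hibernate)      kernel (crypto_hibernation)
  -----------------------------      ---------------------------
  read salt   <------ ioctl -------  salt from get_random_bytes()
  key = PBKDF2(passphrase, salt)
  write key   ------- ioctl ------>  set the cipher key; en/decrypt
                                     the snapshot pages; carry the
                                     salt in the image header across
                                     hibernate/resume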

Generally the advantage is: users do not have to
encrypt the whole swap partition as other tools do.
After all, ideally kernel memory should be encrypted
by the kernel itself.

[1] https://www.spinics.net/lists/linux-crypto/msg33145.html

Suggested-by: Rafael J. Wysocki <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Lee, Chun-Yi" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Chen Yu <[email protected]>
---
kernel/power/Kconfig | 13 ++
kernel/power/Makefile | 1 +
kernel/power/crypto_hibernation.c | 405 ++++++++++++++++++++++++++++++++++++++
kernel/power/power.h | 37 ++++
4 files changed, 456 insertions(+)
create mode 100644 kernel/power/crypto_hibernation.c

diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
index e880ca2..9a23572 100644
--- a/kernel/power/Kconfig
+++ b/kernel/power/Kconfig
@@ -101,6 +101,19 @@ config PM_STD_PARTITION
suspended image to. It will simply pick the first available swap
device.

+config CRYPTO_HIBERNATION
+ tristate "Encryption of snapshot for hibernation"
+ depends on HIBERNATION && CRYPTO_AES && KEYS
+ default n
+ help
+ Allow the kernel to encrypt the snapshot data based on a
+ user-provided passphrase. The user should provide a valid
+ symmetric key to the kernel via ioctl, and the kernel
+ will use that key to encrypt the hibernation snapshot pages.
+ A typical tool can be found under tools/power/crypto/.
+
+ If in doubt, say N.
+
config PM_SLEEP
def_bool y
depends on SUSPEND || HIBERNATE_CALLBACKS
diff --git a/kernel/power/Makefile b/kernel/power/Makefile
index a3f79f0e..52c68a4 100644
--- a/kernel/power/Makefile
+++ b/kernel/power/Makefile
@@ -11,6 +11,7 @@ obj-$(CONFIG_FREEZER) += process.o
obj-$(CONFIG_SUSPEND) += suspend.o
obj-$(CONFIG_PM_TEST_SUSPEND) += suspend_test.o
obj-$(CONFIG_HIBERNATION) += hibernate.o snapshot.o swap.o user.o
+obj-$(CONFIG_CRYPTO_HIBERNATION) += crypto_hibernation.o
obj-$(CONFIG_PM_AUTOSLEEP) += autosleep.o
obj-$(CONFIG_PM_WAKELOCKS) += wakelock.o

diff --git a/kernel/power/crypto_hibernation.c b/kernel/power/crypto_hibernation.c
new file mode 100644
index 0000000..5703cce
--- /dev/null
+++ b/kernel/power/crypto_hibernation.c
@@ -0,0 +1,405 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * linux/kernel/power/crypto_hibernation.c
+ *
+ * This file provides in-kernel encrypted hibernation support.
+ *
+ * Copyright (c) 2018, Intel Corporation.
+ * Copyright (c) 2018, Rafael J. Wysocki <[email protected]>
+ * Copyright (c) 2018, Chen Yu <[email protected]>
+ *
+ * Basically, this solution encrypts the pages before they go to
+ * the block device; the procedure is illustrated below:
+ * 1. User space reads the salt from the kernel and generates
+ * a symmetric (AES) key; the kernel uses that key to encrypt the
+ * hibernation image.
+ * 2. The salt is saved in the image header and passed to
+ * the restore kernel.
+ * 3. During restore, user space reads the salt
+ * from the kernel, prompts the user for the passphrase
+ * to generate the key, and passes that key back to the kernel.
+ * 4. The restore kernel uses that key to decrypt the image.
+ *
+ * Generally the advantage is: users DO NOT have to
+ * encrypt the whole swap partition as other tools do.
+ * After all, ideally kernel memory should be encrypted
+ * by the kernel itself.
+ */
+#define pr_fmt(fmt) "PM: " fmt
+
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/cred.h>
+#include <linux/err.h>
+#include <linux/scatterlist.h>
+#include <linux/random.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/cdev.h>
+#include <crypto/skcipher.h>
+#include <crypto/akcipher.h>
+#include <crypto/aes.h>
+#include <crypto/hash.h>
+#include <crypto/sha.h>
+#include <linux/major.h>
+#include "power.h"
+
+static int crypto_data(const char *inbuf,
+ int inlen,
+ char *outbuf,
+ int outlen,
+ bool encrypt,
+ int page_idx);
+static void crypto_save(void *buf);
+static void crypto_restore(void *buf);
+static int crypto_init(bool suspend);
+
+/* help function hooks */
+static struct hibernation_crypto hib_crypto = {
+ .crypto_data = crypto_data,
+ .save = crypto_save,
+ .restore = crypto_restore,
+ .init = crypto_init,
+};
+
+/* return the key value. */
+static char *get_key_ptr(void)
+{
+ return hib_crypto.keys.derived_key;
+}
+
+/* return the salt value. */
+static char *get_salt_ptr(void)
+{
+ return hib_crypto.keys.salt;
+}
+
+/**
+ * crypto_data() - en/decrypt the data
+ * @inbuf: the source buffer
+ * @inlen: the length of source buffer
+ * @outbuf: the dest buffer
+ * @outlen: the length of dest buffer
+ * @encrypt: encrypt or decrypt
+ * @page_idx: the index of the page being processed
+ *
+ * Return: 0 on success, non-zero for other cases.
+ *
+ * It would be better to use SKCIPHER_REQUEST_ON_STACK to support
+ * multi-threaded encryption; however, hibernation does not support
+ * multi-threaded swap page write-out, because the swap_map has to be
+ * accessed sequentially.
+ */
+static int crypto_data(const char *inbuf,
+ int inlen,
+ char *outbuf,
+ int outlen,
+ bool encrypt,
+ int page_idx)
+{
+ struct scatterlist src, dst;
+ int ret;
+ struct {
+ __le64 idx;
+ u8 padding[HIBERNATE_IV_SIZE - sizeof(__le64)];
+ } iv;
+
+ iv.idx = cpu_to_le64(page_idx);
+ memset(iv.padding, 0, sizeof(iv.padding));
+
+ /*
+ * Do an AES-256 encryption of each page index
+ * to generate the IV.
+ */
+ crypto_cipher_encrypt_one(hib_crypto.essiv_tfm, (u8 *)&iv,
+ (u8 *)&iv);
+ sg_init_one(&src, inbuf, inlen);
+ sg_init_one(&dst, outbuf, outlen);
+ skcipher_request_set_crypt(hib_crypto.req_sk,
+ &src, &dst, outlen, &iv);
+
+ if (encrypt)
+ ret = crypto_skcipher_encrypt(hib_crypto.req_sk);
+ else
+ ret = crypto_skcipher_decrypt(hib_crypto.req_sk);
+ if (ret)
+ pr_err("%s %scrypt failed: %d\n", __func__,
+ encrypt ? "en" : "de", ret);
+
+ return ret;
+}
+
+/* Invoked across hibernate/restore. */
+static void crypto_save(void *buf)
+{
+ memcpy(buf, get_salt_ptr(), HIBERNATE_SALT_BYTES);
+}
+
+static void crypto_restore(void *buf)
+{
+ memcpy(get_salt_ptr(), buf, HIBERNATE_SALT_BYTES);
+}
+
+static int init_crypto_helper(void)
+{
+ int ret = 0;
+
+ /* Symmetric encryption initialization. */
+ if (!hib_crypto.tfm_sk) {
+ hib_crypto.tfm_sk =
+ crypto_alloc_skcipher("xts(aes)", 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(hib_crypto.tfm_sk)) {
+ pr_err("Failed to load transform for aes: %ld\n",
+ PTR_ERR(hib_crypto.tfm_sk));
+ return -ENOMEM;
+ }
+ }
+
+ if (!hib_crypto.req_sk) {
+ hib_crypto.req_sk =
+ skcipher_request_alloc(hib_crypto.tfm_sk, GFP_KERNEL);
+ if (!hib_crypto.req_sk) {
+ pr_err("Failed to allocate request\n");
+ ret = -ENOMEM;
+ goto free_tfm_sk;
+ }
+ }
+ skcipher_request_set_callback(hib_crypto.req_sk, 0, NULL, NULL);
+
+ /* Switch to the image key, and prepare for page en/decryption. */
+ ret = crypto_skcipher_setkey(hib_crypto.tfm_sk, get_key_ptr(),
+ HIBERNATE_KEY_BYTES);
+ if (ret) {
+ pr_err("Failed to set the image key. (%d)\n", ret);
+ goto free_req_sk;
+ }
+
+ return 0;
+
+ free_req_sk:
+ skcipher_request_free(hib_crypto.req_sk);
+ hib_crypto.req_sk = NULL;
+ free_tfm_sk:
+ crypto_free_skcipher(hib_crypto.tfm_sk);
+ hib_crypto.tfm_sk = NULL;
+ return ret;
+}
+
+static void exit_crypto_helper(void)
+{
+ crypto_free_skcipher(hib_crypto.tfm_sk);
+ hib_crypto.tfm_sk = NULL;
+ skcipher_request_free(hib_crypto.req_sk);
+ hib_crypto.req_sk = NULL;
+}
+
+/*
+ * Copied from init_essiv_generator().
+ * Use SHA-256 to derive the key and
+ * save it.
+ */
+static int init_iv_generator(const u8 *raw_key, int keysize)
+{
+ int ret = -EINVAL;
+ u8 salt[SHA256_DIGEST_SIZE];
+
+ /* 1. IV generator initialization. */
+ if (!hib_crypto.essiv_hash_tfm) {
+ hib_crypto.essiv_hash_tfm = crypto_alloc_shash("sha256", 0, 0);
+ if (IS_ERR(hib_crypto.essiv_hash_tfm)) {
+ pr_err("crypto_hibernate: error allocating SHA-256 transform for IV: %ld\n",
+ PTR_ERR(hib_crypto.essiv_hash_tfm));
+ return -ENOMEM;
+ }
+ }
+
+ if (!hib_crypto.essiv_tfm) {
+ hib_crypto.essiv_tfm = crypto_alloc_cipher("aes", 0, 0);
+ if (IS_ERR(hib_crypto.essiv_tfm)) {
+ pr_err("crypto_hibernate: error allocating cipher aes for IV generation: %ld\n",
+ PTR_ERR(hib_crypto.essiv_tfm));
+ ret = -ENOMEM;
+ goto free_essiv_hash;
+ }
+ }
+
+ {
+ /* 2. Use the hash to generate the 256-bit AES key. */
+ SHASH_DESC_ON_STACK(desc, hib_crypto.essiv_hash_tfm);
+
+ desc->tfm = hib_crypto.essiv_hash_tfm;
+ desc->flags = 0;
+ ret = crypto_shash_digest(desc, raw_key, keysize, salt);
+ if (ret) {
+ pr_err("crypto_hibernate: error get digest for raw_key\n");
+ goto free_essiv_hash;
+ }
+ }
+ /* 3. Switch to the 256-bit AES key for later IV generation. */
+ ret = crypto_cipher_setkey(hib_crypto.essiv_tfm, salt, sizeof(salt));
+
+ free_essiv_hash:
+ crypto_free_shash(hib_crypto.essiv_hash_tfm);
+ hib_crypto.essiv_hash_tfm = NULL;
+ return ret;
+}
+
+/*
+ * Either invoked during hibernate or restore.
+ */
+static int crypto_init(bool suspend)
+{
+ int ret = 0;
+
+ pr_info("Prepared to %scrypt the image data.\n",
+ suspend ? "en" : "de");
+ if (!hib_crypto.keys.user_key_valid) {
+ pr_err("Need to get user provided key first!(via ioctl)\n");
+ return -EINVAL;
+ }
+
+ ret = init_crypto_helper();
+ if (ret) {
+ pr_err("Failed to initialize basic crypto helpers. (%d)\n",
+ ret);
+ return ret;
+ }
+ ret = init_iv_generator(get_key_ptr(),
+ HIBERNATE_KEY_BYTES);
+ if (ret) {
+ pr_err("Failed to init the iv generator. (%d)\n", ret);
+ goto out_helper;
+ }
+
+ pr_info("Key generated, waiting for data encryption/decrytion.\n");
+ return 0;
+
+ out_helper:
+ exit_crypto_helper();
+ return ret;
+}
+
+/* key/salt probing via ioctl. */
+dev_t crypto_dev;
+static struct class *crypto_dev_class;
+static struct cdev crypto_cdev;
+
+#define HIBERNATE_SALT_READ _IOW('C', 3, struct hibernation_crypto_keys)
+#define HIBERNATE_KEY_WRITE _IOW('C', 4, struct hibernation_crypto_keys)
+
+static DEFINE_MUTEX(crypto_mutex);
+
+static long crypto_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret = 0;
+
+ mutex_lock(&crypto_mutex);
+ switch (cmd) {
+ case HIBERNATE_SALT_READ:
+ if (copy_to_user((void __user *)arg,
+ get_salt_ptr(),
+ HIBERNATE_SALT_BYTES))
+ ret = -EFAULT;
+ break;
+ case HIBERNATE_KEY_WRITE:
+ if (copy_from_user(get_key_ptr(),
+ (void __user *)arg,
+ HIBERNATE_KEY_BYTES))
+ ret = -EFAULT;
+ hib_crypto.keys.user_key_valid = true;
+ break;
+ default:
+ break;
+ }
+ mutex_unlock(&crypto_mutex);
+
+ return ret;
+}
+
+static int crypto_open(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+static int crypto_release(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+static const struct file_operations crypto_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = crypto_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = crypto_ioctl,
+#endif
+ .open = crypto_open,
+ .release = crypto_release,
+ .llseek = noop_llseek,
+};
+
+static inline void prepare_crypto_ioctl(void)
+{
+ /* generate the random salt */
+ get_random_bytes(get_salt_ptr(), HIBERNATE_SALT_BYTES);
+ /* install the hibernation hooks */
+ set_hibernation_ops(&hib_crypto);
+}
+
+static int crypto_hibernate_init(void)
+{
+ if ((alloc_chrdev_region(&crypto_dev, 0, 1, "crypto")) < 0) {
+ pr_err("Cannot allocate major number for crypto hibernate.\n");
+ return -ENOMEM;
+ }
+
+ cdev_init(&crypto_cdev, &crypto_fops);
+ crypto_cdev.owner = THIS_MODULE;
+ crypto_cdev.ops = &crypto_fops;
+
+ if ((cdev_add(&crypto_cdev, crypto_dev, 1)) < 0) {
+ pr_err("Cannot add the crypto device.\n");
+ goto r_chrdev;
+ }
+
+ crypto_dev_class = class_create(THIS_MODULE,
+ "crypto_class");
+ if (crypto_dev_class == NULL) {
+ pr_err("Cannot create the crypto_class.\n");
+ goto r_cdev;
+ }
+
+ if ((device_create(crypto_dev_class, NULL, crypto_dev, NULL,
+ "crypto_hibernate")) == NULL){
+ pr_err("Cannot create the crypto device node.\n");
+ goto r_device;
+ }
+ prepare_crypto_ioctl();
+
+ return 0;
+
+ r_device:
+ class_destroy(crypto_dev_class);
+ r_cdev:
+ cdev_del(&crypto_cdev);
+ r_chrdev:
+ unregister_chrdev_region(crypto_dev, 1);
+ return -EINVAL;
+}
+
+static void crypto_hibernate_exit(void)
+{
+ set_hibernation_ops(NULL);
+ device_destroy(crypto_dev_class, crypto_dev);
+ class_destroy(crypto_dev_class);
+ cdev_del(&crypto_cdev);
+ unregister_chrdev_region(crypto_dev, 1);
+}
+
+MODULE_AUTHOR("Chen Yu <[email protected]>");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Hibernatin crypto facility");
+
+module_init(crypto_hibernate_init);
+module_exit(crypto_hibernate_exit);
diff --git a/kernel/power/power.h b/kernel/power/power.h
index 9e58bdc..660aac3 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -69,6 +69,43 @@ extern void enable_restore_image_protection(void);
static inline void enable_restore_image_protection(void) {}
#endif /* CONFIG_STRICT_KERNEL_RWX */

+#if IS_ENABLED(CONFIG_CRYPTO_HIBERNATION)
+#define HIBERNATE_SALT_BYTES 16
+#define HIBERNATE_KEY_BYTES 64
+#define HIBERNATE_IV_SIZE 16
+#define TOTAL_USER_INFO_LEN (HIBERNATE_SALT_BYTES+HIBERNATE_KEY_BYTES)
+
+struct hibernation_crypto_keys {
+ char derived_key[HIBERNATE_KEY_BYTES];
+ char salt[HIBERNATE_SALT_BYTES];
+ bool user_key_valid;
+};
+
+struct hibernation_crypto {
+ /* For data encryption */
+ struct crypto_skcipher *tfm_sk;
+ struct skcipher_request *req_sk;
+
+ /* For IV generation */
+ struct crypto_cipher *essiv_tfm;
+ struct crypto_shash *essiv_hash_tfm;
+
+ struct hibernation_crypto_keys keys;
+
+ int (*crypto_data)(const char *inbuf, int inlen,
+ char *outbuf, int outlen,
+ bool encrypt, int page_idx);
+ void (*save)(void *buf);
+ void (*restore)(void *buf);
+ int (*init)(bool suspend);
+};
+
+extern void set_hibernation_ops(struct hibernation_crypto *ops);
+
+#else
+#define HIBERNATE_SALT_BYTES 0
+#endif
+
#else /* !CONFIG_HIBERNATION */

static inline void hibernate_reserved_size_init(void) {}
--
2.7.4


2018-06-20 09:36:47

by Chen Yu

Subject: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

Use the helper functions introduced previously to encrypt
the page data before they are submitted to the block device.
Besides, in the case of hibernation compression, the data
is first compressed and then encrypted; the order is reversed
for the resume process (decrypt, then decompress).
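
In other words, with compression enabled the ordering is (restating
the above):

  hibernate: snapshot page -> LZO compress -> encrypt -> swap_write_page()
  resume:    swap_read_page() -> decrypt -> LZO decompress -> snapshot page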

Suggested-by: Rafael J. Wysocki <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "Lee, Chun-Yi" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Chen Yu <[email protected]>
---
kernel/power/power.h | 1 +
kernel/power/swap.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++---
2 files changed, 205 insertions(+), 11 deletions(-)

diff --git a/kernel/power/power.h b/kernel/power/power.h
index 660aac3..637695c 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -207,6 +207,7 @@ extern int swsusp_swap_in_use(void);
#define SF_PLATFORM_MODE 1
#define SF_NOCOMPRESS_MODE 2
#define SF_CRC32_MODE 4
+#define SF_ENCRYPT_MODE 8

/* kernel/power/hibernate.c */
extern int swsusp_check(void);
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index c2bcf97..2b6b3d0 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -102,14 +102,16 @@ struct swap_map_handle {
unsigned int k;
unsigned long reqd_free_pages;
u32 crc32;
+ bool crypto;
};

struct swsusp_header {
char reserved[PAGE_SIZE - 20 - sizeof(sector_t) - sizeof(int) -
- sizeof(u32)];
+ sizeof(u32) - HIBERNATE_SALT_BYTES];
u32 crc32;
sector_t image;
unsigned int flags; /* Flags to pass to the "boot" kernel */
+ char salt[HIBERNATE_SALT_BYTES];
char orig_sig[10];
char sig[10];
} __packed;
@@ -127,6 +129,53 @@ struct swsusp_extent {
unsigned long end;
};

+/* For encryption/decryption. */
+static struct hibernation_crypto *hibernation_crypto_ops;
+
+void set_hibernation_ops(struct hibernation_crypto *ops)
+{
+ hibernation_crypto_ops = ops;
+}
+EXPORT_SYMBOL_GPL(set_hibernation_ops);
+
+static int crypto_data(const char *inbuf,
+ int inlen,
+ char *outbuf,
+ int outlen,
+ bool encrypt,
+ int page_idx)
+{
+ if (hibernation_crypto_ops &&
+ hibernation_crypto_ops->crypto_data)
+ return hibernation_crypto_ops->crypto_data(inbuf,
+ inlen, outbuf, outlen, encrypt, page_idx);
+ else
+ return -EINVAL;
+}
+
+static void crypto_save(void *outbuf)
+{
+ if (hibernation_crypto_ops &&
+ hibernation_crypto_ops->save)
+ hibernation_crypto_ops->save(outbuf);
+}
+
+static void crypto_restore(void *inbuf)
+{
+ if (hibernation_crypto_ops &&
+ hibernation_crypto_ops->restore)
+ hibernation_crypto_ops->restore(inbuf);
+}
+
+static int crypto_init(bool suspend)
+{
+ if (hibernation_crypto_ops &&
+ hibernation_crypto_ops->init)
+ return hibernation_crypto_ops->init(suspend);
+ else
+ return -EINVAL;
+}
+
static struct rb_root swsusp_extents = RB_ROOT;

static int swsusp_extents_insert(unsigned long swap_offset)
@@ -318,6 +367,10 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
swsusp_header->flags = flags;
if (flags & SF_CRC32_MODE)
swsusp_header->crc32 = handle->crc32;
+ if (handle->crypto) {
+ swsusp_header->flags |= SF_ENCRYPT_MODE;
+ crypto_save((void *)swsusp_header->salt);
+ }
error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
swsusp_resume_block, swsusp_header, NULL);
} else {
@@ -535,11 +588,12 @@ static int save_image(struct swap_map_handle *handle,
{
unsigned int m;
int ret;
- int nr_pages;
+ int nr_pages, crypto_page_idx;
int err2;
struct hib_bio_batch hb;
ktime_t start;
ktime_t stop;
+ void *tmp = NULL, *crypt_buf = NULL;

hib_init_batch(&hb);

@@ -549,12 +603,33 @@ static int save_image(struct swap_map_handle *handle,
if (!m)
m = 1;
nr_pages = 0;
+ crypto_page_idx = 0;
+ if (handle->crypto) {
+ crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
+ if (!crypt_buf)
+ return -ENOMEM;
+ }
+
start = ktime_get();
while (1) {
ret = snapshot_read_next(snapshot);
if (ret <= 0)
break;
- ret = swap_write_page(handle, data_of(*snapshot), &hb);
+ tmp = data_of(*snapshot);
+ if (handle->crypto) {
+ /* Encryption before submit_io.*/
+ ret = crypto_data(data_of(*snapshot),
+ PAGE_SIZE,
+ crypt_buf,
+ PAGE_SIZE,
+ true,
+ crypto_page_idx);
+ if (ret)
+ goto out;
+ crypto_page_idx++;
+ tmp = crypt_buf;
+ }
+ ret = swap_write_page(handle, tmp, &hb);
if (ret)
break;
if (!(nr_pages % m))
@@ -569,6 +644,9 @@ static int save_image(struct swap_map_handle *handle,
if (!ret)
pr_info("Image saving done\n");
swsusp_show_speed(start, stop, nr_to_write, "Wrote");
+ out:
+ if (crypt_buf)
+ free_page((unsigned long)crypt_buf);
return ret;
}

@@ -671,7 +749,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
{
unsigned int m;
int ret = 0;
- int nr_pages;
+ int nr_pages, crypto_page_idx;
int err2;
struct hib_bio_batch hb;
ktime_t start;
@@ -767,6 +845,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
if (!m)
m = 1;
nr_pages = 0;
+ crypto_page_idx = 0;
start = ktime_get();
for (;;) {
for (thr = 0; thr < nr_threads; thr++) {
@@ -835,7 +914,25 @@ static int save_image_lzo(struct swap_map_handle *handle,
for (off = 0;
off < LZO_HEADER + data[thr].cmp_len;
off += PAGE_SIZE) {
- memcpy(page, data[thr].cmp + off, PAGE_SIZE);
+ if (handle->crypto) {
+ /*
+ * Encrypt the compressed data
+ * before we write them to the
+ * block device.
+ */
+ ret = crypto_data(data[thr].cmp + off,
+ PAGE_SIZE,
+ page,
+ PAGE_SIZE,
+ true,
+ crypto_page_idx);
+ if (ret)
+ goto out_finish;
+ crypto_page_idx++;
+ } else {
+ memcpy(page, data[thr].cmp + off,
+ PAGE_SIZE);
+ }

ret = swap_write_page(handle, page, &hb);
if (ret)
@@ -909,6 +1006,7 @@ int swsusp_write(unsigned int flags)
int error;

pages = snapshot_get_image_size();
+ memset(&handle, 0, sizeof(struct swap_map_handle));
error = get_swap_writer(&handle);
if (error) {
pr_err("Cannot get swap writer\n");
@@ -922,6 +1020,9 @@ int swsusp_write(unsigned int flags)
}
}
memset(&snapshot, 0, sizeof(struct snapshot_handle));
+ if (!crypto_init(true))
+ /* The image needs to be encrypted. */
+ handle.crypto = true;
error = snapshot_read_next(&snapshot);
if (error < PAGE_SIZE) {
if (error >= 0)
@@ -1059,7 +1160,8 @@ static int load_image(struct swap_map_handle *handle,
ktime_t stop;
struct hib_bio_batch hb;
int err2;
- unsigned nr_pages;
+ unsigned nr_pages, crypto_page_idx;
+ void *crypt_buf = NULL;

hib_init_batch(&hb);

@@ -1069,18 +1171,42 @@ static int load_image(struct swap_map_handle *handle,
if (!m)
m = 1;
nr_pages = 0;
+ crypto_page_idx = 0;
+ if (handle->crypto) {
+ crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
+ if (!crypt_buf)
+ return -ENOMEM;
+ }
start = ktime_get();
for ( ; ; ) {
ret = snapshot_write_next(snapshot);
if (ret <= 0)
break;
- ret = swap_read_page(handle, data_of(*snapshot), &hb);
+ if (handle->crypto)
+ ret = swap_read_page(handle, crypt_buf, &hb);
+ else
+ ret = swap_read_page(handle, data_of(*snapshot), &hb);
if (ret)
break;
if (snapshot->sync_read)
ret = hib_wait_io(&hb);
if (ret)
break;
+ if (handle->crypto) {
+ /*
+ * Need a decryption for the
+ * data read from the block
+ * device.
+ */
+ ret = crypto_data(crypt_buf, PAGE_SIZE,
+ data_of(*snapshot),
+ PAGE_SIZE,
+ false,
+ crypto_page_idx);
+ if (ret)
+ break;
+ crypto_page_idx++;
+ }
if (!(nr_pages % m))
pr_info("Image loading progress: %3d%%\n",
nr_pages / m * 10);
@@ -1097,6 +1223,8 @@ static int load_image(struct swap_map_handle *handle,
ret = -ENODATA;
}
swsusp_show_speed(start, stop, nr_to_read, "Read");
+ if (crypt_buf)
+ free_page((unsigned long)crypt_buf);
return ret;
}

@@ -1164,7 +1292,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
struct hib_bio_batch hb;
ktime_t start;
ktime_t stop;
- unsigned nr_pages;
+ unsigned nr_pages, crypto_page_idx;
size_t off;
unsigned i, thr, run_threads, nr_threads;
unsigned ring = 0, pg = 0, ring_size = 0,
@@ -1173,6 +1301,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
unsigned char **page = NULL;
struct dec_data *data = NULL;
struct crc_data *crc = NULL;
+ void *first_page = NULL;

hib_init_batch(&hb);

@@ -1278,6 +1407,18 @@ static int load_image_lzo(struct swap_map_handle *handle,
}
want = ring_size = i;

+ /*
+ * The first page of data[thr] contains the length of the
+ * compressed data; this page should not mess up the
+ * read buffer, so we allocate a separate page for it.
+ */
+ if (handle->crypto) {
+ first_page = (void *)get_zeroed_page(GFP_KERNEL);
+ if (!first_page) {
+ ret = -ENOMEM;
+ goto out_clean;
+ }
+ }
pr_info("Using %u thread(s) for decompression\n", nr_threads);
pr_info("Loading and decompressing image data (%u pages)...\n",
nr_to_read);
@@ -1285,6 +1426,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
if (!m)
m = 1;
nr_pages = 0;
+ crypto_page_idx = 0;
start = ktime_get();

ret = snapshot_write_next(snapshot);
@@ -1336,7 +1478,24 @@ static int load_image_lzo(struct swap_map_handle *handle,
}

for (thr = 0; have && thr < nr_threads; thr++) {
- data[thr].cmp_len = *(size_t *)page[pg];
+ if (handle->crypto) {
+ /*
+ * Need to decrypt the first page
+ * of each data[thr], which contains
+ * the compressed data length.
+ */
+ ret = crypto_data(page[pg],
+ PAGE_SIZE,
+ first_page,
+ PAGE_SIZE,
+ false,
+ crypto_page_idx);
+ if (ret)
+ goto out_finish;
+ data[thr].cmp_len = *(size_t *)first_page;
+ } else {
+ data[thr].cmp_len = *(size_t *)page[pg];
+ }
if (unlikely(!data[thr].cmp_len ||
data[thr].cmp_len >
lzo1x_worst_compress(LZO_UNC_SIZE))) {
@@ -1358,8 +1517,26 @@ static int load_image_lzo(struct swap_map_handle *handle,
for (off = 0;
off < LZO_HEADER + data[thr].cmp_len;
off += PAGE_SIZE) {
- memcpy(data[thr].cmp + off,
- page[pg], PAGE_SIZE);
+ if (handle->crypto) {
+ /*
+ * Decrypt the compressed data
+ * and leverage the decompression
+ * threads to get it done.
+ */
+ ret = crypto_data(page[pg],
+ PAGE_SIZE,
+ data[thr].cmp + off,
+ PAGE_SIZE,
+ false,
+ crypto_page_idx);
+ if (ret)
+ goto out_finish;
+ crypto_page_idx++;
+ } else {
+ memcpy(data[thr].cmp + off,
+ page[pg], PAGE_SIZE);
+
+ }
have--;
want++;
if (++pg >= ring_size)
@@ -1452,6 +1629,8 @@ static int load_image_lzo(struct swap_map_handle *handle,
out_clean:
for (i = 0; i < ring_size; i++)
free_page((unsigned long)page[i]);
+ if (first_page)
+ free_page((unsigned long)first_page);
if (crc) {
if (crc->thr)
kthread_stop(crc->thr);
@@ -1482,6 +1661,7 @@ int swsusp_read(unsigned int *flags_p)
struct swsusp_info *header;

memset(&snapshot, 0, sizeof(struct snapshot_handle));
+ memset(&handle, 0, sizeof(struct swap_map_handle));
error = snapshot_write_next(&snapshot);
if (error < PAGE_SIZE)
return error < 0 ? error : -EFAULT;
@@ -1489,6 +1669,16 @@ int swsusp_read(unsigned int *flags_p)
error = get_swap_reader(&handle, flags_p);
if (error)
goto end;
+ if (*flags_p & SF_ENCRYPT_MODE) {
+ error = crypto_init(false);
+ if (!error) {
+ /* The image has been encrypted. */
+ handle.crypto = true;
+ } else {
+ pr_err("Failed to init cipher during resume.\n");
+ goto end;
+ }
+ }
if (!error)
error = swap_read_page(&handle, header, NULL);
if (!error) {
@@ -1526,6 +1716,9 @@ int swsusp_check(void)

if (!memcmp(HIBERNATE_SIG, swsusp_header->sig, 10)) {
memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10);
+ /* Read salt passed from previous kernel. */
+ if (swsusp_header->flags & SF_ENCRYPT_MODE)
+ crypto_restore((void *)&swsusp_header->salt);
/* Reset swap signature now */
error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
swsusp_resume_block,
--
2.7.4


2018-06-20 17:43:08

by Eric Biggers

Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

Hi Chen,

On Wed, Jun 20, 2018 at 05:40:51PM +0800, Chen Yu wrote:
> crypto_hibernate is a user-space utility to generate
> 512bits AES key and pass it to the kernel via ioctl
> for hibernation encryption.(We can also add the key
> into kernel via keyctl if necessary, but currently
> using ioctl seems to be more straightforward as we
> need both the key and salt transit).
>
> The key derivation is based on a simplified implementation
> of PBKDF2[1] in e2fsprogs - both the key length and the hash
> bytes are the same - 512bits. crypto_hibernate will firstly
> probe the user for passphrase and get salt from kernel, then
> uses them to generate a 512bits AES key based on PBKDF2.

What is a "512bits AES key"? Do you mean AES-256-XTS (which takes a 512-bit
key, which the XTS mode internally splits into two keys)? Do you allow for
other algorithms, or is it hardcoded to AES-256-XTS? What if someone wants to
use a different algorithm?

BTW, it's difficult to review this with only patch 3/3 Cc'ed to me, as there is
no context about the problem you are trying to solve and what your actual
proposed kernel changes are. I suggest Cc'ing linux-crypto on all 3 patches.

A few more comments below, from a very brief reading of the code:

[...]
> +
> +#define PBKDF2_ITERATIONS 0xFFFF
> +#define SHA512_BLOCKSIZE 128
> +#define SHA512_LENGTH 64
> +#define SALT_BYTES 16
> +#define SYM_KEY_BYTES SHA512_LENGTH
> +#define TOTAL_USER_INFO_LEN (SALT_BYTES+SYM_KEY_BYTES)
> +#define MAX_PASSPHRASE_SIZE 1024
> +
> +struct hibernation_crypto_keys {
> + char derived_key[SYM_KEY_BYTES];
> + char salt[SALT_BYTES];
> + bool valid;
> +};
> +
> +struct hibernation_crypto_keys hib_keys;
> +
> +static char *get_key_ptr(void)
> +{
> + return hib_keys.derived_key;
> +}
> +
> +static char *get_salt_ptr(void)
> +{
> + return hib_keys.salt;
> +}
[...]
> +
> +
> +#define HIBERNATE_SALT_READ _IOW('C', 3, struct hibernation_crypto_keys)
> +#define HIBERNATE_KEY_WRITE _IOW('C', 4, struct hibernation_crypto_keys)

Why are the ioctl numbers defined based on the size of 'struct
hibernation_crypto_keys'? It's not a UAPI structure, right?

> +
> +static void get_passphrase(char *passphrase, int len)
> +{
> + char *p;
> + struct termios current_settings;
> +
> + assert(len > 0);
> + disable_echo(&current_settings);
> + p = fgets(passphrase, len, stdin);
> + tcsetattr(0, TCSANOW, &current_settings);
> + printf("\n");
> + if (!p) {
> + printf("Aborting.\n");
> + exit(1);
> + }
> + p = strrchr(passphrase, '\n');
> + if (!p)
> + p = passphrase + len - 1;
> + *p = '\0';
> +}
> +
> +#define CRYPTO_FILE "/dev/crypto_hibernate"
> +
> +static int write_keys(void)
> +{
> + int fd;
> +
> + fd = open(CRYPTO_FILE, O_RDWR);
> + if (fd < 0) {
> + printf("Cannot open device file...\n");
> + return -EINVAL;
> + }
> + ioctl(fd, HIBERNATE_KEY_WRITE, get_key_ptr());
> + return 0;

No error checking on the ioctl?

Also, how is the kernel supposed to know how long the key is, and which
algorithm it's supposed to be used for? I assume it's hardcoded to AES-256-XTS?
What if someone wants to use a different algorithm?

> +}
> +
> +static int read_salt(void)
> +{
> + int fd;
> +
> + fd = open(CRYPTO_FILE, O_RDWR);
> + if (fd < 0) {
> + printf("Cannot open device file...\n");
> + return -EINVAL;
> + }
> + ioctl(fd, HIBERNATE_SALT_READ, get_salt_ptr());
> + return 0;
> +}

No error checking on the ioctl?

> +int main(int argc, char *argv[])
> +{
> + int opt, option_index = 0;
> + char in_passphrase[MAX_PASSPHRASE_SIZE];
> +
> + while ((opt = getopt_long_only(argc, argv, "+p:s:h",
> + NULL, &option_index)) != -1) {
> + switch (opt) {
> + case 'p':
> + {
> + char *p = optarg;
> +
> + if (strlen(p) >= (MAX_PASSPHRASE_SIZE - 1)) {
> + printf("Please provide passphrase less than %d bytes.\n",
> + MAX_PASSPHRASE_SIZE);
> + exit(1);
> + }
> + strcpy(in_passphrase, p);

I haven't read this super closely, but this really looks like an off-by-one
error. It seems you intended MAX_PASSPHRASE_SIZE to include a null terminator,
so the correct check would be 'strlen(p) >= MAX_PASSPHRASE_SIZE'.

> + }
> + break;
> + case 's':
> + {
> + char *p = optarg;
> +
> + if (strlen(p) != (SALT_BYTES - 1)) {
> + printf("Please provide salt with len less than %d bytes.\n",
> + SALT_BYTES);
> + exit(1);
> + }
> + strcpy(get_salt_ptr(), p);
> + }
> + break;

Salts don't need to be human-readable. How about making the salt binary? So, a
salt specified on the command-line would be hex.
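
Something like this untested sketch could do it (names are just
illustrative):

	/* Parse a 2*SALT_BYTES hex string into the binary salt. */
	static int parse_hex_salt(const char *hex, unsigned char *salt,
				  size_t salt_len)
	{
		size_t i;

		if (strlen(hex) != 2 * salt_len)
			return -1;
		for (i = 0; i < salt_len; i++) {
			unsigned int byte;

			if (sscanf(&hex[2 * i], "%2x", &byte) != 1)
				return -1;
			salt[i] = (unsigned char)byte;
		}
		return 0;
	}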

Eric

2018-06-21 08:54:35

by Pavel Machek

Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

Hi!

> As security becomes more and more important, we add the in-kernel
> encryption support for hibernation.
...
> There was a discussion on the mailing list on whether this key should
> be derived in kernel or in user space. And it turns out to be generating
> the key by user space is more acceptable[1]. So this patch set is divided
> into two parts:
> 1. The hibernation snapshot encryption in kernel space,
> 2. the key derivation implementation in user space.

uswsusp was created so that this kind of stuff could be kept in
userspace. You get a graphical progress bar (etc.) too. As you already
have a userspace component for key derivation, I see no advantage
over uswsusp.

If you have some, please explain.

Best regards,
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2018-06-21 09:03:04

by Pavel Machek

Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility



On Wed 2018-06-20 17:40:51, Chen Yu wrote:
> crypto_hibernate is a user-space utility to generate
> 512bits AES key and pass it to the kernel via ioctl
> for hibernation encryption.(We can also add the key
> into kernel via keyctl if necessary, but currently
> using ioctl seems to be more straightforward as we
> need both the key and salt transit).

Ah, here you go:

https://en.wikipedia.org/wiki/Uswsusp

It can already do encryption. If you see problems there, patches will
be gladly accepted to fix them.

If you want to maintain uswsusp in kernel tree, I guess that would be
an option, too.

If you want to just help swsusp, I have some patches that reduce
duplication between x86-32 and x86-64. It would be cool to clean them
up and get them merged.

> Suggested-by: "Theodore Ts'o" <[email protected]>

Best regards,
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2018-06-21 12:09:34

by Rafael J. Wysocki

Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

On Thu, Jun 21, 2018 at 10:53 AM, Pavel Machek <[email protected]> wrote:
> Hi!
>
>> As security becomes more and more important, we add the in-kernel
>> encryption support for hibernation.
> ...
>> There was a discussion on the mailing list on whether this key should
>> be derived in kernel or in user space. And it turns out to be generating
>> the key by user space is more acceptable[1]. So this patch set is divided
>> into two parts:
>> 1. The hibernation snapshot encryption in kernel space,
>> 2. the key derivation implementation in user space.
>
> uswsusp was created so that this kind of stuff could be kept in
> userspace. You get graphical progress bar (etc) too. As you already
> have userspace component for key derivation, I see no advantages to
> uswsusp.
>
> If you have some, please explain.

Not having to transfer plain text kernel memory to user space is one IMO.

Besides, the user space part of what you are calling uswsusp has not
been actively maintained for years now and honestly I don't know how
many users of it there are.

2018-06-21 12:11:29

by Rafael J. Wysocki

Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

On Thu, Jun 21, 2018 at 11:01 AM, Pavel Machek <[email protected]> wrote:
>
>
> On Wed 2018-06-20 17:40:51, Chen Yu wrote:
>> crypto_hibernate is a user-space utility to generate
>> 512bits AES key and pass it to the kernel via ioctl
>> for hibernation encryption.(We can also add the key
>> into kernel via keyctl if necessary, but currently
>> using ioctl seems to be more straightforward as we
>> need both the key and salt transit).
>
> Ah, here you go:
>
> https://en.wikipedia.org/wiki/Uswsusp

No, not really.

> It can already do encryption. If you see problems there, patches will
> be gladly accepted to fix them.

By whom?

> If you want to maintain uswsusp in kernel tree, I guess that would be
> an option, too.

No, it isn't an option due to the dependencies on crypto libraries.

> If you want to just help swsusp, I have some patches that reduce
> duplicities between x86-32 and x86-64. It would be cool to clean them
> up and get them merged.

Sure, please submit them then.

Thanks,
Rafael

2018-06-21 19:05:47

by Pavel Machek

Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

On Thu 2018-06-21 14:10:06, Rafael J. Wysocki wrote:
> On Thu, Jun 21, 2018 at 11:01 AM, Pavel Machek <[email protected]> wrote:
> >
> >
> > On Wed 2018-06-20 17:40:51, Chen Yu wrote:
> >> crypto_hibernate is a user-space utility to generate
> >> 512bits AES key and pass it to the kernel via ioctl
> >> for hibernation encryption.(We can also add the key
> >> into kernel via keyctl if necessary, but currently
> >> using ioctl seems to be more straightforward as we
> >> need both the key and salt transit).
> >
> > Ah, here you go:
> >
> > https://en.wikipedia.org/wiki/Uswsusp
>
> No, not really.
>
> > It can already do encryption. If you see problems there, patches will
> > be gladly accepted to fix them.
>
> By whom?

By me.

> > If you want to maintain uswsusp in kernel tree, I guess that would be
> > an option, too.
>
> No, it isn't an option due to the dependencies on crypto libraries.

Why not?

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2018-06-21 19:16:26

by Pavel Machek

Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

On Thu 2018-06-21 14:08:40, Rafael J. Wysocki wrote:
> On Thu, Jun 21, 2018 at 10:53 AM, Pavel Machek <[email protected]> wrote:
> > Hi!
> >
> >> As security becomes more and more important, we add the in-kernel
> >> encryption support for hibernation.
> > ...
> >> There was a discussion on the mailing list on whether this key should
> >> be derived in kernel or in user space. And it turns out to be generating
> >> the key by user space is more acceptable[1]. So this patch set is divided
> >> into two parts:
> >> 1. The hibernation snapshot encryption in kernel space,
> >> 2. the key derivation implementation in user space.
> >
> > uswsusp was created so that this kind of stuff could be kept in
> > userspace. You get graphical progress bar (etc) too. As you already
> > have userspace component for key derivation, I see no advantages to
> > uswsusp.
> >
> > If you have some, please explain.
>
> Not having to transfer plain text kernel memory to user space is one
> IMO.

Well, AFAICT in this case userland has the key and encrypted data are
on disk. That does not seem to be an improvement.

> Besides, the user space part of what you are calling uswsusp has not
> been actively maintained for years now and honestly I don't know how
> many users of it there are.

I'd assume distros want progress bars so they still use it?

Anyway, there's a solution for encrypted hibernation. If Intel wants to
invent a different solution for that, and put it into the kernel, they
should explain what the advantages are relative to the existing solution.

Best regards,

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2018-06-22 02:11:09

by Chen Yu

Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

Hi,
On Thu, Jun 21, 2018 at 09:14:43PM +0200, Pavel Machek wrote:
> On Thu 2018-06-21 14:08:40, Rafael J. Wysocki wrote:
> > On Thu, Jun 21, 2018 at 10:53 AM, Pavel Machek <[email protected]> wrote:
> > > Hi!
> > >
> > >> As security becomes more and more important, we add the in-kernel
> > >> encryption support for hibernation.
> > > ...
> > >> There was a discussion on the mailing list on whether this key should
> > >> be derived in kernel or in user space. And it turns out to be generating
> > >> the key by user space is more acceptable[1]. So this patch set is divided
> > >> into two parts:
> > >> 1. The hibernation snapshot encryption in kernel space,
> > >> 2. the key derivation implementation in user space.
> > >
> > > uswsusp was created so that this kind of stuff could be kept in
> > > userspace. You get graphical progress bar (etc) too. As you already
> > > have userspace component for key derivation, I see no advantages to
> > > uswsusp.
> > >
> > > If you have some, please explain.
> >
> > Not having to transfer plain text kernel memory to user space is one
> > IMO.
>
> Well, AFAICT in this case userland has the key and encrypted data are
> on disk. That does not seem to be improvement.
>
uswsusp needs to read the snapshot from the kernel first, while
doing the encryption directly in the kernel would reduce the IO. Besides,
the kernel memory content is protected from being read by user space
in the first place, even though it eventually ends up encrypted on
the disk.

Best,
Yu


2018-06-22 02:34:33

by Chen Yu

[permalink] [raw]
Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

Hi Eric,
On Wed, Jun 20, 2018 at 10:41:42AM -0700, Eric Biggers wrote:
> Hi Chen,
>
> On Wed, Jun 20, 2018 at 05:40:51PM +0800, Chen Yu wrote:
> > crypto_hibernate is a user-space utility to generate
> > 512bits AES key and pass it to the kernel via ioctl
> > for hibernation encryption.(We can also add the key
> > into kernel via keyctl if necessary, but currently
> > using ioctl seems to be more straightforward as we
> > need both the key and salt transit).
> >
> > The key derivation is based on a simplified implementation
> > of PBKDF2[1] in e2fsprogs - both the key length and the hash
> > bytes are the same - 512bits. crypto_hibernate will firstly
> > probe the user for passphrase and get salt from kernel, then
> > uses them to generate a 512bits AES key based on PBKDF2.
Thanks for reviewing this.
>
> What is a "512bits AES key"? Do you mean AES-256-XTS (which takes a 512-bit
> key, which the XTS mode internally splits into two keys)?
Yes, it is AES-256-XTS.
> Do you allow for
> other algorithms, or is it hardcoded to AES-256-XTS?
Currently it is hardcoded to AES-256-XTS. It is copied from the implementation
of PBKDF2 in e2fsprogs, which is hardcoded to use AES-256-XTS for ext4
encryption in the kernel (pbkdf2_sha512() in e2fsprogs).
> What if someone wants to
> use a different algorithm?
>
If the user wants to use a different algorithm, then I think we need to
port the code from OpenSSL, which has a full implementation of PBKDF2
as specified in:
https://www.ietf.org/rfc/rfc2898.txt
> BTW, it's difficult to review this with only patch 3/3 Cc'ed to me, as there is
> no context about the problem you are trying to solve and what your actual
> proposed kernel changes are. I suggest Cc'ing linux-crypto on all 3 patches.
>
Ok, I'll send a refreshed version.
> A few more comments below, from a very brief reading of the code:
>
> [...]
> > +
> > +#define PBKDF2_ITERATIONS 0xFFFF
> > +#define SHA512_BLOCKSIZE 128
> > +#define SHA512_LENGTH 64
> > +#define SALT_BYTES 16
> > +#define SYM_KEY_BYTES SHA512_LENGTH
> > +#define TOTAL_USER_INFO_LEN (SALT_BYTES+SYM_KEY_BYTES)
> > +#define MAX_PASSPHRASE_SIZE 1024
> > +
> > +struct hibernation_crypto_keys {
> > + char derived_key[SYM_KEY_BYTES];
> > + char salt[SALT_BYTES];
> > + bool valid;
> > +};
> > +
> > +struct hibernation_crypto_keys hib_keys;
> > +
> > +static char *get_key_ptr(void)
> > +{
> > + return hib_keys.derived_key;
> > +}
> > +
> > +static char *get_salt_ptr(void)
> > +{
> > + return hib_keys.salt;
> > +}
> [...]
> > +
> > +
> > +#define HIBERNATE_SALT_READ _IOW('C', 3, struct hibernation_crypto_keys)
> > +#define HIBERNATE_KEY_WRITE _IOW('C', 4, struct hibernation_crypto_keys)
>
> Why are the ioctl numbers defined based on the size of 'struct
> hibernation_crypto_keys'? It's not a UAPI structure, right?
>
It's not a UAPI structure; it is defined both in the user space tool
and in the kernel. Do you mean I should put the definition of this
structure under include/uapi?
> > +
> > +static void get_passphrase(char *passphrase, int len)
> > +{
> > + char *p;
> > + struct termios current_settings;
> > +
> > + assert(len > 0);
> > + disable_echo(&current_settings);
> > + p = fgets(passphrase, len, stdin);
> > + tcsetattr(0, TCSANOW, &current_settings);
> > + printf("\n");
> > + if (!p) {
> > + printf("Aborting.\n");
> > + exit(1);
> > + }
> > + p = strrchr(passphrase, '\n');
> > + if (!p)
> > + p = passphrase + len - 1;
> > + *p = '\0';
> > +}
> > +
> > +#define CRYPTO_FILE "/dev/crypto_hibernate"
> > +
> > +static int write_keys(void)
> > +{
> > + int fd;
> > +
> > + fd = open(CRYPTO_FILE, O_RDWR);
> > + if (fd < 0) {
> > + printf("Cannot open device file...\n");
> > + return -EINVAL;
> > + }
> > + ioctl(fd, HIBERNATE_KEY_WRITE, get_key_ptr());
> > + return 0;
>
> No error checking on the ioctl?
>
Ok, will add error checking for it.
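As a rough sketch, the checked version could look something like the
following (the exact messages and error handling are illustrative only,
and it needs <errno.h> and <string.h> in addition to the existing includes):

	static int write_keys(void)
	{
		int fd, ret;

		fd = open(CRYPTO_FILE, O_RDWR);
		if (fd < 0) {
			printf("Cannot open device file...\n");
			return -EINVAL;
		}
		/* Report a failed ioctl instead of silently ignoring it. */
		ret = ioctl(fd, HIBERNATE_KEY_WRITE, get_key_ptr());
		if (ret < 0) {
			printf("HIBERNATE_KEY_WRITE failed: %s\n", strerror(errno));
			ret = -errno;
		}
		close(fd);
		return ret;
	}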
> Also, how is the kernel supposed to know how long the key is, and which
> algorithm it's supposed to be used for? I assume it's hardcoded to AES-256-XTS?
> What if someone wants to use a different algorithm?
>
Yes, the length in both user space and the kernel is hardcoded for AES-256-XTS.
I can add support for other algorithms, but might need to port from
OpenSSL first.
> > +}
> > +
> > +static int read_salt(void)
> > +{
> > + int fd;
> > +
> > + fd = open(CRYPTO_FILE, O_RDWR);
> > + if (fd < 0) {
> > + printf("Cannot open device file...\n");
> > + return -EINVAL;
> > + }
> > + ioctl(fd, HIBERNATE_SALT_READ, get_salt_ptr());
> > + return 0;
> > +}
>
> No error checking on the ioctl?
>
Ok, will add checks.
> > +int main(int argc, char *argv[])
> > +{
> > + int opt, option_index = 0;
> > + char in_passphrase[MAX_PASSPHRASE_SIZE];
> > +
> > + while ((opt = getopt_long_only(argc, argv, "+p:s:h",
> > + NULL, &option_index)) != -1) {
> > + switch (opt) {
> > + case 'p':
> > + {
> > + char *p = optarg;
> > +
> > + if (strlen(p) >= (MAX_PASSPHRASE_SIZE - 1)) {
> > + printf("Please provide passphrase less than %d bytes.\n",
> > + MAX_PASSPHRASE_SIZE);
> > + exit(1);
> > + }
> > + strcpy(in_passphrase, p);
>
> I haven't read this super closely, but this really looks like an off-by-one
> error. It seems you intended MAX_PASSPHRASE_SIZE to include a null terminator,
> so the correct check would be 'strlen(p) >= MAX_PASSPHRASE_SIZE'.
>
Ah, right, will change it.
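For reference, a minimal sketch of the corrected check, with
MAX_PASSPHRASE_SIZE including the trailing NUL as intended:

	if (strlen(p) >= MAX_PASSPHRASE_SIZE) {
		printf("Please provide a passphrase shorter than %d bytes.\n",
		       MAX_PASSPHRASE_SIZE);
		exit(1);
	}
	strcpy(in_passphrase, p);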
> > + }
> > + break;
> > + case 's':
> > + {
> > + char *p = optarg;
> > +
> > + if (strlen(p) != (SALT_BYTES - 1)) {
> > + printf("Please provide salt with len less than %d bytes.\n",
> > + SALT_BYTES);
> > + exit(1);
> > + }
> > + strcpy(get_salt_ptr(), p);
> > + }
> > + break;
>
> Salts don't need to be human-readable. How about making the salt binary? So, a
> salt specified on the command-line would be hex.
>
Ok, I will change it to hex form.
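A rough sketch of the hex parsing (the helper name and error handling
below are illustrative only):

	/* Convert a 2*SALT_BYTES hex string into SALT_BYTES raw bytes. */
	static int parse_hex_salt(const char *hex, unsigned char *out)
	{
		int i;

		if (strlen(hex) != 2 * SALT_BYTES)
			return -EINVAL;
		for (i = 0; i < SALT_BYTES; i++) {
			unsigned int byte;

			if (sscanf(hex + 2 * i, "%2x", &byte) != 1)
				return -EINVAL;
			out[i] = (unsigned char)byte;
		}
		return 0;
	}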
Best,
Yu
> Eric

2018-06-22 03:01:39

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

Hi Yu,

On Fri, Jun 22, 2018 at 10:39:13AM +0800, Yu Chen wrote:
> Hi Eric,
> On Wed, Jun 20, 2018 at 10:41:42AM -0700, Eric Biggers wrote:
> > Hi Chen,
> >
> > On Wed, Jun 20, 2018 at 05:40:51PM +0800, Chen Yu wrote:
> > > crypto_hibernate is a user-space utility to generate
> > > 512bits AES key and pass it to the kernel via ioctl
> > > for hibernation encryption.(We can also add the key
> > > into kernel via keyctl if necessary, but currently
> > > using ioctl seems to be more straightforward as we
> > > need both the key and salt transit).
> > >
> > > The key derivation is based on a simplified implementation
> > > of PBKDF2[1] in e2fsprogs - both the key length and the hash
> > > bytes are the same - 512bits. crypto_hibernate will firstly
> > > probe the user for passphrase and get salt from kernel, then
> > > uses them to generate a 512bits AES key based on PBKDF2.
> Thanks for reviewing this.
> >
> > What is a "512bits AES key"? Do you mean AES-256-XTS (which takes a 512-bit
> > key, which the XTS mode internally splits into two keys)?
> Yes, it is AES-256-XTS.
> > Do you allow for
> > other algorithms, or is it hardcoded to AES-256-XTS?
> Currently it is hardcoded to AES-256-XTS. It is copied from implementation
> of PBKDF2 in e2fsprogs, which is hardcoded to useAES-256-XTS for ext4 encryption
> in the kernel(pbkdf2_sha512() in e2fsprogs)
>
> > What if someone wants to
> > use a different algorithm?
> >
> If user wants to use a different algorithm, then I think we need to
> port the code from openssl, which is the full implementation of PBKDF2
> for:
> https://www.ietf.org/rfc/rfc2898.txt

You seem to be confusing the key derivation function (KDF) with the encryption
algorithm the derived key is used for. The purpose of a KDF is to derive key
material. The resulting key material can be used for any encryption algorithm,
provided that you derive the needed amount of key material. Note that different
encryption algorithms can use the same length key, e.g. SERPENT-256-XTS and
AES-256-XTS are both examples of algorithms that use a 64 byte (512-bit) key.
So your design probably should provide a way for an *algorithm* to be selected,
not just a key length.
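To make the distinction concrete, here is a rough user space sketch of
deriving 64 bytes (512 bits) of key material with PBKDF2-HMAC-SHA512 via
OpenSSL; the wrapper name and its parameters are illustrative, not part of
the patches (PBKDF2_ITERATIONS is the constant from your tool). The same
derived bytes could then be handed to AES-256-XTS, SERPENT-256-XTS, or any
other cipher that takes a 512-bit key:

	#include <string.h>
	#include <openssl/evp.h>

	/* Derive key_len bytes of key material; the KDF does not care which
	 * cipher later consumes the output.
	 */
	static int derive_key_material(const char *passphrase,
				       const unsigned char *salt, int salt_len,
				       unsigned char *key_out, int key_len)
	{
		return PKCS5_PBKDF2_HMAC(passphrase, strlen(passphrase),
					 salt, salt_len,
					 PBKDF2_ITERATIONS, EVP_sha512(),
					 key_len, key_out) == 1 ? 0 : -1;
	}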

> > BTW, it's difficult to review this with only patch 3/3 Cc'ed to me, as there is
> > no context about the problem you are trying to solve and what your actual
> > proposed kernel changes are. I suggest Cc'ing linux-crypto on all 3 patches.
> >
> Ok, I'll send a refreshed version.
> > A few more comments below, from a very brief reading of the code:
> >
> > [...]
> > > +
> > > +#define PBKDF2_ITERATIONS 0xFFFF
> > > +#define SHA512_BLOCKSIZE 128
> > > +#define SHA512_LENGTH 64
> > > +#define SALT_BYTES 16
> > > +#define SYM_KEY_BYTES SHA512_LENGTH
> > > +#define TOTAL_USER_INFO_LEN (SALT_BYTES+SYM_KEY_BYTES)
> > > +#define MAX_PASSPHRASE_SIZE 1024
> > > +
> > > +struct hibernation_crypto_keys {
> > > + char derived_key[SYM_KEY_BYTES];
> > > + char salt[SALT_BYTES];
> > > + bool valid;
> > > +};
> > > +
> > > +struct hibernation_crypto_keys hib_keys;
> > > +
> > > +static char *get_key_ptr(void)
> > > +{
> > > + return hib_keys.derived_key;
> > > +}
> > > +
> > > +static char *get_salt_ptr(void)
> > > +{
> > > + return hib_keys.salt;
> > > +}
> > [...]
> > > +
> > > +
> > > +#define HIBERNATE_SALT_READ _IOW('C', 3, struct hibernation_crypto_keys)
> > > +#define HIBERNATE_KEY_WRITE _IOW('C', 4, struct hibernation_crypto_keys)
> >
> > Why are the ioctl numbers defined based on the size of 'struct
> > hibernation_crypto_keys'? It's not a UAPI structure, right?
> >
> It's not a UAPI structure, and it is defined both in user space tool
> and in kernel. Do you mean, I should put the defination of this
> structure under include/uapi ?

No, I'm saying that you shouldn't define ioctl numbers (permanent ABI) based on
the size of some random structure that is subject to change.
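One illustrative way to avoid that (the structure name and layout below are
hypothetical, not from the patches) is to freeze a fixed-layout structure as
part of the ABI and derive the ioctl numbers from it, so the encoded size
cannot change behind your back:

	#include <linux/types.h>
	#include <linux/ioctl.h>

	/* Hypothetical fixed-layout ABI structure; its size is frozen by the
	 * ABI rather than by the kernel's internal bookkeeping structure.
	 */
	struct hibernate_crypto_params {
		__u8  derived_key[64];	/* 512 bits of key material */
		__u8  salt[16];
		__u32 flags;
	};

	/* _IOR: user space reads the salt from the kernel;
	 * _IOW: user space writes the key to the kernel.
	 */
	#define HIBERNATE_SALT_READ	_IOR('C', 3, struct hibernate_crypto_params)
	#define HIBERNATE_KEY_WRITE	_IOW('C', 4, struct hibernate_crypto_params)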

> > > +
> > > +static void get_passphrase(char *passphrase, int len)
> > > +{
> > > + char *p;
> > > + struct termios current_settings;
> > > +
> > > + assert(len > 0);
> > > + disable_echo(&current_settings);
> > > + p = fgets(passphrase, len, stdin);
> > > + tcsetattr(0, TCSANOW, &current_settings);
> > > + printf("\n");
> > > + if (!p) {
> > > + printf("Aborting.\n");
> > > + exit(1);
> > > + }
> > > + p = strrchr(passphrase, '\n');
> > > + if (!p)
> > > + p = passphrase + len - 1;
> > > + *p = '\0';
> > > +}
> > > +
> > > +#define CRYPTO_FILE "/dev/crypto_hibernate"
> > > +
> > > +static int write_keys(void)
> > > +{
> > > + int fd;
> > > +
> > > + fd = open(CRYPTO_FILE, O_RDWR);
> > > + if (fd < 0) {
> > > + printf("Cannot open device file...\n");
> > > + return -EINVAL;
> > > + }
> > > + ioctl(fd, HIBERNATE_KEY_WRITE, get_key_ptr());
> > > + return 0;
> >
> > No error checking on the ioctl?
> >
> Ok, will add error checking for it.
> > Also, how is the kernel supposed to know how long the key is, and which
> > algorithm it's supposed to be used for? I assume it's hardcoded to AES-256-XTS?
> > What if someone wants to use a different algorithm?
> >
> Yes, the length in both user space and kernel are hardcoded to AES-256-XTS.
> I can add the support for other algorithm, but might need to port from
> openssl first.

The kernel crypto API already supports many other ciphers. It's the kernel that
actually does the encryption, right?
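For illustration (this is not taken from the patches), on the kernel side
the algorithm is just a name handed to the crypto API, so supporting another
cipher is mostly a matter of which string and key length get passed in:

	#include <crypto/skcipher.h>
	#include <linux/err.h>

	/* Allocate and key a symmetric cipher chosen by name, e.g. "xts(aes)"
	 * or "xts(serpent)".
	 */
	static struct crypto_skcipher *alloc_image_cipher(const char *alg,
							  const u8 *key,
							  unsigned int keylen)
	{
		struct crypto_skcipher *tfm;
		int ret;

		tfm = crypto_alloc_skcipher(alg, 0, 0);
		if (IS_ERR(tfm))
			return tfm;

		ret = crypto_skcipher_setkey(tfm, key, keylen);
		if (ret) {
			crypto_free_skcipher(tfm);
			return ERR_PTR(ret);
		}
		return tfm;
	}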

Again, you should Cc the whole patchset to linux-crypto so that people can
review it context, including your new kernel code that actually does the
encryption, and see what problem you are actually trying to solve.

Thanks!

- Eric

2018-06-25 07:08:26

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

On Thu, Jun 21, 2018 at 9:04 PM, Pavel Machek <[email protected]> wrote:
> On Thu 2018-06-21 14:10:06, Rafael J. Wysocki wrote:
>> On Thu, Jun 21, 2018 at 11:01 AM, Pavel Machek <[email protected]> wrote:
>> >
>> >
>> > On Wed 2018-06-20 17:40:51, Chen Yu wrote:
>> >> crypto_hibernate is a user-space utility to generate
>> >> 512bits AES key and pass it to the kernel via ioctl
>> >> for hibernation encryption.(We can also add the key
>> >> into kernel via keyctl if necessary, but currently
>> >> using ioctl seems to be more straightforward as we
>> >> need both the key and salt transit).
>> >
>> > Ah, here you go:
>> >
>> > https://en.wikipedia.org/wiki/Uswsusp
>>
>> No, not really.
>>
>> > It can already do encryption. If you see problems there, patches will
>> > be gladly accepted to fix them.
>>
>> By whom?
>
> By me.

Well, OK

Where's the repository the patches will go to when you accept them?

>> > If you want to maintain uswsusp in kernel tree, I guess that would be
>> > an option, too.
>>
>> No, it isn't an option due to the dependencies on crypto libraries.
>
> Why not?

You need a specific set of devel libraries for the utilities to build
and that need not be present on all systems I suppose?

Besides, s2disk is currently tangled with s2ram which is obsolete, so
I'd rather not.

2018-06-25 07:17:13

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

On Thu, Jun 21, 2018 at 9:14 PM, Pavel Machek <[email protected]> wrote:
> On Thu 2018-06-21 14:08:40, Rafael J. Wysocki wrote:
>> On Thu, Jun 21, 2018 at 10:53 AM, Pavel Machek <[email protected]> wrote:
>> > Hi!
>> >
>> >> As security becomes more and more important, we add the in-kernel
>> >> encryption support for hibernation.
>> > ...
>> >> There was a discussion on the mailing list on whether this key should
>> >> be derived in kernel or in user space. And it turns out to be generating
>> >> the key by user space is more acceptable[1]. So this patch set is divided
>> >> into two parts:
>> >> 1. The hibernation snapshot encryption in kernel space,
>> >> 2. the key derivation implementation in user space.
>> >
>> > uswsusp was created so that this kind of stuff could be kept in
>> > userspace. You get graphical progress bar (etc) too. As you already
>> > have userspace component for key derivation, I see no advantages to
>> > uswsusp.
>> >
>> > If you have some, please explain.
>>
>> Not having to transfer plain text kernel memory to user space is one
>> IMO.
>
> Well, AFAICT in this case userland has the key and encrypted data are
> on disk. That does not seem to be improvement.

Not really.

With the encryption in the kernel, if the kernel is careful enough,
user space will not be able to read the image even if it knows the
passphrase, unless it can also add itself to the initramfs image
loaded by the restore kernel, which (at least) can be made way more
difficult than simply reading the plain-text image data via an I/F
readily provided by the kernel.

>> Besides, the user space part of what you are calling uswsusp has not
>> been actively maintained for years now and honestly I don't know how
>> many users of it there are.
>
> I'd assume distros want progress bars so they still use it?

I'd rather not speak for distros, but if hibernation images are
written to fast storage, progress bars are not that useful any more.
They are not used on Windows any more, for one.

> Anyway, there's solution for encrypted hibernation.

Which is suboptimal and you know it.

> If Intel wants to invent different solution for that, and put it into kernel, they
> should explain what the advantages are, relative to existing solution.

I'm not sure what "they" is supposed to mean here, but the advantages
are quite clear to me: better security and reduced syscall overhead.

2018-06-25 11:55:25

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

Hi!

> >> > It can already do encryption. If you see problems there, patches will
> >> > be gladly accepted to fix them.
> >>
> >> By whom?
> >
> > By me.
>
> Well, OK
>
> Where's the repository the patches will go to when you accept them?

On sourceforge.

> >> > If you want to maintain uswsusp in kernel tree, I guess that would be
> >> > an option, too.
> >>
> >> No, it isn't an option due to the dependencies on crypto libraries.
> >
> > Why not?
>
> You need a specific set of devel libraries for the utilities to build
> and that need not be present on all systems I suppose?
>
> Besides, s2disk is currently tangled with s2ram which is obsolete, so
> I'd rather not.

It's not tangled.

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


2018-06-25 11:55:57

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

On Fri 2018-06-22 10:14:10, Yu Chen wrote:
> Hi,
> On Thu, Jun 21, 2018 at 09:14:43PM +0200, Pavel Machek wrote:
> > On Thu 2018-06-21 14:08:40, Rafael J. Wysocki wrote:
> > > On Thu, Jun 21, 2018 at 10:53 AM, Pavel Machek <[email protected]> wrote:
> > > > Hi!
> > > >
> > > >> As security becomes more and more important, we add the in-kernel
> > > >> encryption support for hibernation.
> > > > ...
> > > >> There was a discussion on the mailing list on whether this key should
> > > >> be derived in kernel or in user space. And it turns out to be generating
> > > >> the key by user space is more acceptable[1]. So this patch set is divided
> > > >> into two parts:
> > > >> 1. The hibernation snapshot encryption in kernel space,
> > > >> 2. the key derivation implementation in user space.
> > > >
> > > > uswsusp was created so that this kind of stuff could be kept in
> > > > userspace. You get graphical progress bar (etc) too. As you already
> > > > have userspace component for key derivation, I see no advantages to
> > > > uswsusp.
> > > >
> > > > If you have some, please explain.
> > >
> > > Not having to transfer plain text kernel memory to user space is one
> > > IMO.
> >
> > Well, AFAICT in this case userland has the key and encrypted data are
> > on disk. That does not seem to be improvement.
> >
> uswsusp needs to read the snapshot from kernel first, while
> do encryption in kernel directly would reduce the IO. Besides,
> the kernel memory content is protect from been read from
> user space from first place, although finally they are
> encrypted on the disk.

If you believe your solution is faster, please benchmark it. I don't
believe it will be.

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


2018-06-25 12:00:08

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption



> > Well, AFAICT in this case userland has the key and encrypted data are
> > on disk. That does not seem to be improvement.
>
> Not really.
>
> With the encryption in the kernel, if the kernel is careful enough,
> use space will not be able to read the image even if it knows the
> passphrase, unless it can also add itself to the initramfs image
> loaded by the restore kernel, which (at least) can be made way more
> difficult than simply reading the plain-text image data via an I/F
> readily provided by the kernel.

I still do not see the improvement. If you are root, you can modify
the initramfs and decrypt the data.

Please explain in the changelog how this is better than existing solution.

> >> Besides, the user space part of what you are calling uswsusp has not
> >> been actively maintained for years now and honestly I don't know how
> >> many users of it there are.
> >
> > I'd assume distros want progress bars so they still use it?
>
> I'd rather not speak for distros, but if hibernation images are
> written to fast storage, progress bars are not that useful any more.
> They are not used on Windows any more, for one.
>
> > Anyway, there's solution for encrypted hibernation.
>
> Which is suboptimal and you know it.

If this is better, please explain how in the changelog.

> > If Intel wants to invent different solution for that, and put it into kernel, they
> > should explain what the advantages are, relative to existing solution.
>
> I'm not sure what "they" is supposed to mean here, but the advantages
> are quite clear to me: better security and reduced syscall overhead.

Better security against which attack?

Syscall overhead is not a problem for hibernation, and you know it.

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


2018-06-25 21:58:34

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

On Mon, Jun 25, 2018 at 1:54 PM, Pavel Machek <[email protected]> wrote:
> Hi!
>
>> >> > It can already do encryption. If you see problems there, patches will
>> >> > be gladly accepted to fix them.
>> >>
>> >> By whom?
>> >
>> > By me.
>>
>> Well, OK
>>
>> Where's the repository the patches will go to when you accept them?
>
> On sourceforge.

No, it's not there or at least it's outdated AFAICS.

>> >> > If you want to maintain uswsusp in kernel tree, I guess that would be
>> >> > an option, too.
>> >>
>> >> No, it isn't an option due to the dependencies on crypto libraries.
>> >
>> > Why not?
>>
>> You need a specific set of devel libraries for the utilities to build
>> and that need not be present on all systems I suppose?
>>
>> Besides, s2disk is currently tangled with s2ram which is obsolete, so
>> I'd rather not.
>
> Its not tangled.

Yes, it is.

2018-06-25 22:16:19

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

On Mon, Jun 25, 2018 at 1:59 PM, Pavel Machek <[email protected]> wrote:
>
>
>> > Well, AFAICT in this case userland has the key and encrypted data are
>> > on disk. That does not seem to be improvement.
>>
>> Not really.
>>
>> With the encryption in the kernel, if the kernel is careful enough,
>> use space will not be able to read the image even if it knows the
>> passphrase, unless it can also add itself to the initramfs image
>> loaded by the restore kernel, which (at least) can be made way more
>> difficult than simply reading the plain-text image data via an I/F
>> readily provided by the kernel.
>
> I still do not see the improvement. If you are root, you can modify
> the initramfs and decrypt the data.

Even so, this requires additional effort which already makes the life
of attackers harder.

> Please explain in the changelog how this is better than existing solution.

That can be done.

>> >> Besides, the user space part of what you are calling uswsusp has not
>> >> been actively maintained for years now and honestly I don't know how
>> >> many users of it there are.
>> >
>> > I'd assume distros want progress bars so they still use it?
>>
>> I'd rather not speak for distros, but if hibernation images are
>> written to fast storage, progress bars are not that useful any more.
>> They are not used on Windows any more, for one.
>>
>> > Anyway, there's solution for encrypted hibernation.
>>
>> Which is suboptimal and you know it.
>
> If this is better, please explain how in the changelog.
>
>> > If Intel wants to invent different solution for that, and put it into kernel, they
>> > should explain what the advantages are, relative to existing solution.
>>
>> I'm not sure what "they" is supposed to mean here, but the advantages
>> are quite clear to me: better security and reduced syscall overhead.
>
> Better security against which attack?

If kernel memory is encrypted by the kernel, it doesn't have to worry
about bugs in user space utilities and the like, for example.

> Syscall overhead is not a problem for hibernation, and you know it.

Oh well.

I wrote the majority of the code you seem to be defending and I don't
really share your enthusiasm about it. It was good enough at the time
it was written, but not today and this discussion is over as far as
I'm concerned.

2018-06-25 22:18:26

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

Hi!

> >> >> > If you want to maintain uswsusp in kernel tree, I guess that would be
> >> >> > an option, too.
> >> >>
> >> >> No, it isn't an option due to the dependencies on crypto libraries.
> >> >
> >> > Why not?
> >>
> >> You need a specific set of devel libraries for the utilities to build
> >> and that need not be present on all systems I suppose?
> >>
> >> Besides, s2disk is currently tangled with s2ram which is obsolete, so
> >> I'd rather not.
> >
> > Its not tangled.
>
> Yes, it is.

I checked the Makefile, and it builds from separate sources. Yes, it
is in one repository, but s2disk does not depend on s2ram (s2both does).

Anyway... that's really beside the point.

Interested parties can easily fix up the userland parts of uswsusp,
change crypto, add or remove dependencies, move it to other hosting,
or drop it and start again. The kernel interface is flexible enough. If
Chen wants to move the s2disk encryption into the kernel, it is his task
to explain why that is necessary.

Pavel

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


2018-06-26 11:13:46

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH 3/3][RFC] tools: create power/crypto utility

On Tue 2018-06-26 12:30:24, Oliver Neukum wrote:
> On Di, 2018-06-26 at 00:16 +0200, Pavel Machek wrote:
> > Interested parties can easily fix up the userland parts of uswsusp,
> > change crypto, add or remove dependencies, move it to other hosting,
> > or drop it and start again. The kernel interface is flexible enough. If
> > Chen wants to move the s2disk encryption into the kernel, it is his task
> > to explain why that is necessary.
>
> We would have to assume that the kernel is on a higher level of trust.
> To a certain extent it is. You cannot conceptually drop support for /dev/kmem
> if there is an ioctl to snapshot it.

If I understood the description, the proposed patches give userspace the
encryption key plus an image encrypted with that key. So... that's not
really an improvement.

Anyway, I guess it makes sense to wait for v2 of the patches with a better
description of the security goals.

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


2018-06-28 13:09:29

by joeyli

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

Hi Chen Yu,

On Wed, Jun 20, 2018 at 05:40:32PM +0800, Chen Yu wrote:
> Use the helper functions introduced previously to encrypt
> the page data before they are submitted to the block device.
> Besides, for the case of hibernation compression, the data
> are firstly compressed and then encrypted, and vice versa
> for the resume process.
>

I want to suggest my solution, which directly signs/encrypts the
memory snapshot image. This solution has already shipped with
SLE12 for a couple of years:

https://github.com/joeyli/linux-s4sign/commits/s4sign-hmac-encrypted-key-v0.2-v4.17-rc3

The above patches still need some cleanup. I am working on some
other bugs, but I can clean them up and send them out ASAP.

The advantage of this solution is that it produces a signed and
encrypted image. It is not just for writing to the block device by the
kernel; it can also provide a signed/encrypted image to user space,
which can then store the encrypted image anywhere.

I am OK with your user space key generator because I don't have a
similar solution yet. I am working on the EFI master key and also
want to adapt hibernation to the keyring. I will continue that work.

Thanks a lot!
Joey Lee

> Suggested-by: Rafael J. Wysocki <[email protected]>
> Cc: Rafael J. Wysocki <[email protected]>
> Cc: Pavel Machek <[email protected]>
> Cc: Len Brown <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: "Lee, Chun-Yi" <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Chen Yu <[email protected]>
> ---
> kernel/power/power.h | 1 +
> kernel/power/swap.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++---
> 2 files changed, 205 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/power/power.h b/kernel/power/power.h
> index 660aac3..637695c 100644
> --- a/kernel/power/power.h
> +++ b/kernel/power/power.h
> @@ -207,6 +207,7 @@ extern int swsusp_swap_in_use(void);
> #define SF_PLATFORM_MODE 1
> #define SF_NOCOMPRESS_MODE 2
> #define SF_CRC32_MODE 4
> +#define SF_ENCRYPT_MODE 8
>
> /* kernel/power/hibernate.c */
> extern int swsusp_check(void);
> diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> index c2bcf97..2b6b3d0 100644
> --- a/kernel/power/swap.c
> +++ b/kernel/power/swap.c
> @@ -102,14 +102,16 @@ struct swap_map_handle {
> unsigned int k;
> unsigned long reqd_free_pages;
> u32 crc32;
> + bool crypto;
> };
>
> struct swsusp_header {
> char reserved[PAGE_SIZE - 20 - sizeof(sector_t) - sizeof(int) -
> - sizeof(u32)];
> + sizeof(u32) - HIBERNATE_SALT_BYTES];
> u32 crc32;
> sector_t image;
> unsigned int flags; /* Flags to pass to the "boot" kernel */
> + char salt[HIBERNATE_SALT_BYTES];
> char orig_sig[10];
> char sig[10];
> } __packed;
> @@ -127,6 +129,53 @@ struct swsusp_extent {
> unsigned long end;
> };
>
> +/* For encryption/decryption. */
> +static struct hibernation_crypto *hibernation_crypto_ops;
> +
> +void set_hibernation_ops(struct hibernation_crypto *ops)
> +{
> + hibernation_crypto_ops = ops;
> +}
> +EXPORT_SYMBOL_GPL(set_hibernation_ops);
> +
> +static int crypto_data(const char *inbuf,
> + int inlen,
> + char *outbuf,
> + int outlen,
> + bool encrypt,
> + int page_idx)
> +{
> + if (hibernation_crypto_ops &&
> + hibernation_crypto_ops->crypto_data)
> + return hibernation_crypto_ops->crypto_data(inbuf,
> + inlen, outbuf, outlen, encrypt, page_idx);
> + else
> + return -EINVAL;
> +}
> +
> +static void crypto_save(void *outbuf)
> +{
> + if (hibernation_crypto_ops &&
> + hibernation_crypto_ops->save)
> + hibernation_crypto_ops->save(outbuf);
> +}
> +
> +static void crypto_restore(void *inbuf)
> +{
> + if (hibernation_crypto_ops &&
> + hibernation_crypto_ops->restore)
> + hibernation_crypto_ops->restore(inbuf);
> +}
> +
> +static int crypto_init(bool suspend)
> +{
> + if (hibernation_crypto_ops &&
> + hibernation_crypto_ops->init)
> + return hibernation_crypto_ops->init(suspend);
> + else
> + return -EINVAL;
> +}
> +
> static struct rb_root swsusp_extents = RB_ROOT;
>
> static int swsusp_extents_insert(unsigned long swap_offset)
> @@ -318,6 +367,10 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
> swsusp_header->flags = flags;
> if (flags & SF_CRC32_MODE)
> swsusp_header->crc32 = handle->crc32;
> + if (handle->crypto) {
> + swsusp_header->flags |= SF_ENCRYPT_MODE;
> + crypto_save((void *)swsusp_header->salt);
> + }
> error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
> swsusp_resume_block, swsusp_header, NULL);
> } else {
> @@ -535,11 +588,12 @@ static int save_image(struct swap_map_handle *handle,
> {
> unsigned int m;
> int ret;
> - int nr_pages;
> + int nr_pages, crypto_page_idx;
> int err2;
> struct hib_bio_batch hb;
> ktime_t start;
> ktime_t stop;
> + void *tmp = NULL, *crypt_buf = NULL;
>
> hib_init_batch(&hb);
>
> @@ -549,12 +603,33 @@ static int save_image(struct swap_map_handle *handle,
> if (!m)
> m = 1;
> nr_pages = 0;
> + crypto_page_idx = 0;
> + if (handle->crypto) {
> + crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
> + if (!crypt_buf)
> + return -ENOMEM;
> + }
> +
> start = ktime_get();
> while (1) {
> ret = snapshot_read_next(snapshot);
> if (ret <= 0)
> break;
> - ret = swap_write_page(handle, data_of(*snapshot), &hb);
> + tmp = data_of(*snapshot);
> + if (handle->crypto) {
> + /* Encryption before submit_io.*/
> + ret = crypto_data(data_of(*snapshot),
> + PAGE_SIZE,
> + crypt_buf,
> + PAGE_SIZE,
> + true,
> + crypto_page_idx);
> + if (ret)
> + goto out;
> + crypto_page_idx++;
> + tmp = crypt_buf;
> + }
> + ret = swap_write_page(handle, tmp, &hb);
> if (ret)
> break;
> if (!(nr_pages % m))
> @@ -569,6 +644,9 @@ static int save_image(struct swap_map_handle *handle,
> if (!ret)
> pr_info("Image saving done\n");
> swsusp_show_speed(start, stop, nr_to_write, "Wrote");
> + out:
> + if (crypt_buf)
> + free_page((unsigned long)crypt_buf);
> return ret;
> }
>
> @@ -671,7 +749,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
> {
> unsigned int m;
> int ret = 0;
> - int nr_pages;
> + int nr_pages, crypto_page_idx;
> int err2;
> struct hib_bio_batch hb;
> ktime_t start;
> @@ -767,6 +845,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
> if (!m)
> m = 1;
> nr_pages = 0;
> + crypto_page_idx = 0;
> start = ktime_get();
> for (;;) {
> for (thr = 0; thr < nr_threads; thr++) {
> @@ -835,7 +914,25 @@ static int save_image_lzo(struct swap_map_handle *handle,
> for (off = 0;
> off < LZO_HEADER + data[thr].cmp_len;
> off += PAGE_SIZE) {
> - memcpy(page, data[thr].cmp + off, PAGE_SIZE);
> + if (handle->crypto) {
> + /*
> + * Encrypt the compressed data
> + * before we write them to the
> + * block device.
> + */
> + ret = crypto_data(data[thr].cmp + off,
> + PAGE_SIZE,
> + page,
> + PAGE_SIZE,
> + true,
> + crypto_page_idx);
> + if (ret)
> + goto out_finish;
> + crypto_page_idx++;
> + } else {
> + memcpy(page, data[thr].cmp + off,
> + PAGE_SIZE);
> + }
>
> ret = swap_write_page(handle, page, &hb);
> if (ret)
> @@ -909,6 +1006,7 @@ int swsusp_write(unsigned int flags)
> int error;
>
> pages = snapshot_get_image_size();
> + memset(&handle, 0, sizeof(struct swap_map_handle));
> error = get_swap_writer(&handle);
> if (error) {
> pr_err("Cannot get swap writer\n");
> @@ -922,6 +1020,9 @@ int swsusp_write(unsigned int flags)
> }
> }
> memset(&snapshot, 0, sizeof(struct snapshot_handle));
> + if (!crypto_init(true))
> + /* The image needs to be encrypted. */
> + handle.crypto = true;
> error = snapshot_read_next(&snapshot);
> if (error < PAGE_SIZE) {
> if (error >= 0)
> @@ -1059,7 +1160,8 @@ static int load_image(struct swap_map_handle *handle,
> ktime_t stop;
> struct hib_bio_batch hb;
> int err2;
> - unsigned nr_pages;
> + unsigned nr_pages, crypto_page_idx;
> + void *crypt_buf = NULL;
>
> hib_init_batch(&hb);
>
> @@ -1069,18 +1171,42 @@ static int load_image(struct swap_map_handle *handle,
> if (!m)
> m = 1;
> nr_pages = 0;
> + crypto_page_idx = 0;
> + if (handle->crypto) {
> + crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
> + if (!crypt_buf)
> + return -ENOMEM;
> + }
> start = ktime_get();
> for ( ; ; ) {
> ret = snapshot_write_next(snapshot);
> if (ret <= 0)
> break;
> - ret = swap_read_page(handle, data_of(*snapshot), &hb);
> + if (handle->crypto)
> + ret = swap_read_page(handle, crypt_buf, &hb);
> + else
> + ret = swap_read_page(handle, data_of(*snapshot), &hb);
> if (ret)
> break;
> if (snapshot->sync_read)
> ret = hib_wait_io(&hb);
> if (ret)
> break;
> + if (handle->crypto) {
> + /*
> + * Need a decryption for the
> + * data read from the block
> + * device.
> + */
> + ret = crypto_data(crypt_buf, PAGE_SIZE,
> + data_of(*snapshot),
> + PAGE_SIZE,
> + false,
> + crypto_page_idx);
> + if (ret)
> + break;
> + crypto_page_idx++;
> + }
> if (!(nr_pages % m))
> pr_info("Image loading progress: %3d%%\n",
> nr_pages / m * 10);
> @@ -1097,6 +1223,8 @@ static int load_image(struct swap_map_handle *handle,
> ret = -ENODATA;
> }
> swsusp_show_speed(start, stop, nr_to_read, "Read");
> + if (crypt_buf)
> + free_page((unsigned long)crypt_buf);
> return ret;
> }
>
> @@ -1164,7 +1292,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
> struct hib_bio_batch hb;
> ktime_t start;
> ktime_t stop;
> - unsigned nr_pages;
> + unsigned nr_pages, crypto_page_idx;
> size_t off;
> unsigned i, thr, run_threads, nr_threads;
> unsigned ring = 0, pg = 0, ring_size = 0,
> @@ -1173,6 +1301,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
> unsigned char **page = NULL;
> struct dec_data *data = NULL;
> struct crc_data *crc = NULL;
> + void *first_page = NULL;
>
> hib_init_batch(&hb);
>
> @@ -1278,6 +1407,18 @@ static int load_image_lzo(struct swap_map_handle *handle,
> }
> want = ring_size = i;
>
> + /*
> + * The first page of data[thr] contains the length of
> + * compressed data, this page should not mess up the
> + * read buffer, so we allocate a separate page for it.
> + */
> + if (handle->crypto) {
> + first_page = (void *)get_zeroed_page(GFP_KERNEL);
> + if (!first_page) {
> + ret = -ENOMEM;
> + goto out_clean;
> + }
> + }
> pr_info("Using %u thread(s) for decompression\n", nr_threads);
> pr_info("Loading and decompressing image data (%u pages)...\n",
> nr_to_read);
> @@ -1285,6 +1426,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
> if (!m)
> m = 1;
> nr_pages = 0;
> + crypto_page_idx = 0;
> start = ktime_get();
>
> ret = snapshot_write_next(snapshot);
> @@ -1336,7 +1478,24 @@ static int load_image_lzo(struct swap_map_handle *handle,
> }
>
> for (thr = 0; have && thr < nr_threads; thr++) {
> - data[thr].cmp_len = *(size_t *)page[pg];
> + if (handle->crypto) {
> + /*
> + * Need to decrypt the first page
> + * of each data[thr], which contains
> + * the compressed data length.
> + */
> + ret = crypto_data(page[pg],
> + PAGE_SIZE,
> + first_page,
> + PAGE_SIZE,
> + false,
> + crypto_page_idx);
> + if (ret)
> + goto out_finish;
> + data[thr].cmp_len = *(size_t *)first_page;
> + } else {
> + data[thr].cmp_len = *(size_t *)page[pg];
> + }
> if (unlikely(!data[thr].cmp_len ||
> data[thr].cmp_len >
> lzo1x_worst_compress(LZO_UNC_SIZE))) {
> @@ -1358,8 +1517,26 @@ static int load_image_lzo(struct swap_map_handle *handle,
> for (off = 0;
> off < LZO_HEADER + data[thr].cmp_len;
> off += PAGE_SIZE) {
> - memcpy(data[thr].cmp + off,
> - page[pg], PAGE_SIZE);
> + if (handle->crypto) {
> + /*
> + * Decrypt the compressed data
> + * and leverage the decompression
> + * threads to get it done.
> + */
> + ret = crypto_data(page[pg],
> + PAGE_SIZE,
> + data[thr].cmp + off,
> + PAGE_SIZE,
> + false,
> + crypto_page_idx);
> + if (ret)
> + goto out_finish;
> + crypto_page_idx++;
> + } else {
> + memcpy(data[thr].cmp + off,
> + page[pg], PAGE_SIZE);
> +
> + }
> have--;
> want++;
> if (++pg >= ring_size)
> @@ -1452,6 +1629,8 @@ static int load_image_lzo(struct swap_map_handle *handle,
> out_clean:
> for (i = 0; i < ring_size; i++)
> free_page((unsigned long)page[i]);
> + if (first_page)
> + free_page((unsigned long)first_page);
> if (crc) {
> if (crc->thr)
> kthread_stop(crc->thr);
> @@ -1482,6 +1661,7 @@ int swsusp_read(unsigned int *flags_p)
> struct swsusp_info *header;
>
> memset(&snapshot, 0, sizeof(struct snapshot_handle));
> + memset(&handle, 0, sizeof(struct swap_map_handle));
> error = snapshot_write_next(&snapshot);
> if (error < PAGE_SIZE)
> return error < 0 ? error : -EFAULT;
> @@ -1489,6 +1669,16 @@ int swsusp_read(unsigned int *flags_p)
> error = get_swap_reader(&handle, flags_p);
> if (error)
> goto end;
> + if (*flags_p & SF_ENCRYPT_MODE) {
> + error = crypto_init(false);
> + if (!error) {
> + /* The image has been encrypted. */
> + handle.crypto = true;
> + } else {
> + pr_err("Failed to init cipher during resume.\n");
> + goto end;
> + }
> + }
> if (!error)
> error = swap_read_page(&handle, header, NULL);
> if (!error) {
> @@ -1526,6 +1716,9 @@ int swsusp_check(void)
>
> if (!memcmp(HIBERNATE_SIG, swsusp_header->sig, 10)) {
> memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10);
> + /* Read salt passed from previous kernel. */
> + if (swsusp_header->flags & SF_ENCRYPT_MODE)
> + crypto_restore((void *)&swsusp_header->salt);
> /* Reset swap signature now */
> error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
> swsusp_resume_block,
> --
> 2.7.4
>

2018-06-28 13:45:57

by Chen Yu

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

Hi,
On Thu, Jun 28, 2018 at 09:07:20PM +0800, joeyli wrote:
> Hi Chen Yu,
>
> On Wed, Jun 20, 2018 at 05:40:32PM +0800, Chen Yu wrote:
> > Use the helper functions introduced previously to encrypt
> > the page data before they are submitted to the block device.
> > Besides, for the case of hibernation compression, the data
> > are firstly compressed and then encrypted, and vice versa
> > for the resume process.
> >
>
> I want to suggest my solution that it direct signs/encrypts the
> memory snapshot image. This solution is already shipped with
> SLE12 a couple of years:
>
> https://github.com/joeyli/linux-s4sign/commits/s4sign-hmac-encrypted-key-v0.2-v4.17-rc3
>
I did not see image page encryption in the above link. If I understand
correctly, your code focuses on signing and encrypting the hidden data,
but does not target the whole snapshot data.
> The above patches still need to clean up. I am working on some
> other bugs, but I can clean up and send out it ASAP.
>
> The advantage of this solution is that it produces a signed and
> encrypted image. Not just for writing to block device by kernel,
> it also can provide a signed/encrypted image to user space. User
> space can store the encrypted image to anywhere.
>
> I am OK for your user space key generator because I didn't have
> similar solution yet. I am working on the EFI master key and also
> want to adapt hibernation to keyring. I will continue the works.
>
The user space tool can easily add keyring support besides the
ioctl if needed.

Best,
Yu
> Thanks a lot!
> Joey Lee
>
> > Suggested-by: Rafael J. Wysocki <[email protected]>
> > Cc: Rafael J. Wysocki <[email protected]>
> > Cc: Pavel Machek <[email protected]>
> > Cc: Len Brown <[email protected]>
> > Cc: Borislav Petkov <[email protected]>
> > Cc: "Lee, Chun-Yi" <[email protected]>
> > Cc: [email protected]
> > Cc: [email protected]
> > Signed-off-by: Chen Yu <[email protected]>
> > ---
> > kernel/power/power.h | 1 +
> > kernel/power/swap.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++---
> > 2 files changed, 205 insertions(+), 11 deletions(-)
> >
> > diff --git a/kernel/power/power.h b/kernel/power/power.h
> > index 660aac3..637695c 100644
> > --- a/kernel/power/power.h
> > +++ b/kernel/power/power.h
> > @@ -207,6 +207,7 @@ extern int swsusp_swap_in_use(void);
> > #define SF_PLATFORM_MODE 1
> > #define SF_NOCOMPRESS_MODE 2
> > #define SF_CRC32_MODE 4
> > +#define SF_ENCRYPT_MODE 8
> >
> > /* kernel/power/hibernate.c */
> > extern int swsusp_check(void);
> > diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> > index c2bcf97..2b6b3d0 100644
> > --- a/kernel/power/swap.c
> > +++ b/kernel/power/swap.c
> > @@ -102,14 +102,16 @@ struct swap_map_handle {
> > unsigned int k;
> > unsigned long reqd_free_pages;
> > u32 crc32;
> > + bool crypto;
> > };
> >
> > struct swsusp_header {
> > char reserved[PAGE_SIZE - 20 - sizeof(sector_t) - sizeof(int) -
> > - sizeof(u32)];
> > + sizeof(u32) - HIBERNATE_SALT_BYTES];
> > u32 crc32;
> > sector_t image;
> > unsigned int flags; /* Flags to pass to the "boot" kernel */
> > + char salt[HIBERNATE_SALT_BYTES];
> > char orig_sig[10];
> > char sig[10];
> > } __packed;
> > @@ -127,6 +129,53 @@ struct swsusp_extent {
> > unsigned long end;
> > };
> >
> > +/* For encryption/decryption. */
> > +static struct hibernation_crypto *hibernation_crypto_ops;
> > +
> > +void set_hibernation_ops(struct hibernation_crypto *ops)
> > +{
> > + hibernation_crypto_ops = ops;
> > +}
> > +EXPORT_SYMBOL_GPL(set_hibernation_ops);
> > +
> > +static int crypto_data(const char *inbuf,
> > + int inlen,
> > + char *outbuf,
> > + int outlen,
> > + bool encrypt,
> > + int page_idx)
> > +{
> > + if (hibernation_crypto_ops &&
> > + hibernation_crypto_ops->crypto_data)
> > + return hibernation_crypto_ops->crypto_data(inbuf,
> > + inlen, outbuf, outlen, encrypt, page_idx);
> > + else
> > + return -EINVAL;
> > +}
> > +
> > +static void crypto_save(void *outbuf)
> > +{
> > + if (hibernation_crypto_ops &&
> > + hibernation_crypto_ops->save)
> > + hibernation_crypto_ops->save(outbuf);
> > +}
> > +
> > +static void crypto_restore(void *inbuf)
> > +{
> > + if (hibernation_crypto_ops &&
> > + hibernation_crypto_ops->restore)
> > + hibernation_crypto_ops->restore(inbuf);
> > +}
> > +
> > +static int crypto_init(bool suspend)
> > +{
> > + if (hibernation_crypto_ops &&
> > + hibernation_crypto_ops->init)
> > + return hibernation_crypto_ops->init(suspend);
> > + else
> > + return -EINVAL;
> > +}
> > +
> > static struct rb_root swsusp_extents = RB_ROOT;
> >
> > static int swsusp_extents_insert(unsigned long swap_offset)
> > @@ -318,6 +367,10 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
> > swsusp_header->flags = flags;
> > if (flags & SF_CRC32_MODE)
> > swsusp_header->crc32 = handle->crc32;
> > + if (handle->crypto) {
> > + swsusp_header->flags |= SF_ENCRYPT_MODE;
> > + crypto_save((void *)swsusp_header->salt);
> > + }
> > error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
> > swsusp_resume_block, swsusp_header, NULL);
> > } else {
> > @@ -535,11 +588,12 @@ static int save_image(struct swap_map_handle *handle,
> > {
> > unsigned int m;
> > int ret;
> > - int nr_pages;
> > + int nr_pages, crypto_page_idx;
> > int err2;
> > struct hib_bio_batch hb;
> > ktime_t start;
> > ktime_t stop;
> > + void *tmp = NULL, *crypt_buf = NULL;
> >
> > hib_init_batch(&hb);
> >
> > @@ -549,12 +603,33 @@ static int save_image(struct swap_map_handle *handle,
> > if (!m)
> > m = 1;
> > nr_pages = 0;
> > + crypto_page_idx = 0;
> > + if (handle->crypto) {
> > + crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
> > + if (!crypt_buf)
> > + return -ENOMEM;
> > + }
> > +
> > start = ktime_get();
> > while (1) {
> > ret = snapshot_read_next(snapshot);
> > if (ret <= 0)
> > break;
> > - ret = swap_write_page(handle, data_of(*snapshot), &hb);
> > + tmp = data_of(*snapshot);
> > + if (handle->crypto) {
> > + /* Encryption before submit_io.*/
> > + ret = crypto_data(data_of(*snapshot),
> > + PAGE_SIZE,
> > + crypt_buf,
> > + PAGE_SIZE,
> > + true,
> > + crypto_page_idx);
> > + if (ret)
> > + goto out;
> > + crypto_page_idx++;
> > + tmp = crypt_buf;
> > + }
> > + ret = swap_write_page(handle, tmp, &hb);
> > if (ret)
> > break;
> > if (!(nr_pages % m))
> > @@ -569,6 +644,9 @@ static int save_image(struct swap_map_handle *handle,
> > if (!ret)
> > pr_info("Image saving done\n");
> > swsusp_show_speed(start, stop, nr_to_write, "Wrote");
> > + out:
> > + if (crypt_buf)
> > + free_page((unsigned long)crypt_buf);
> > return ret;
> > }
> >
> > @@ -671,7 +749,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
> > {
> > unsigned int m;
> > int ret = 0;
> > - int nr_pages;
> > + int nr_pages, crypto_page_idx;
> > int err2;
> > struct hib_bio_batch hb;
> > ktime_t start;
> > @@ -767,6 +845,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
> > if (!m)
> > m = 1;
> > nr_pages = 0;
> > + crypto_page_idx = 0;
> > start = ktime_get();
> > for (;;) {
> > for (thr = 0; thr < nr_threads; thr++) {
> > @@ -835,7 +914,25 @@ static int save_image_lzo(struct swap_map_handle *handle,
> > for (off = 0;
> > off < LZO_HEADER + data[thr].cmp_len;
> > off += PAGE_SIZE) {
> > - memcpy(page, data[thr].cmp + off, PAGE_SIZE);
> > + if (handle->crypto) {
> > + /*
> > + * Encrypt the compressed data
> > + * before we write them to the
> > + * block device.
> > + */
> > + ret = crypto_data(data[thr].cmp + off,
> > + PAGE_SIZE,
> > + page,
> > + PAGE_SIZE,
> > + true,
> > + crypto_page_idx);
> > + if (ret)
> > + goto out_finish;
> > + crypto_page_idx++;
> > + } else {
> > + memcpy(page, data[thr].cmp + off,
> > + PAGE_SIZE);
> > + }
> >
> > ret = swap_write_page(handle, page, &hb);
> > if (ret)
> > @@ -909,6 +1006,7 @@ int swsusp_write(unsigned int flags)
> > int error;
> >
> > pages = snapshot_get_image_size();
> > + memset(&handle, 0, sizeof(struct swap_map_handle));
> > error = get_swap_writer(&handle);
> > if (error) {
> > pr_err("Cannot get swap writer\n");
> > @@ -922,6 +1020,9 @@ int swsusp_write(unsigned int flags)
> > }
> > }
> > memset(&snapshot, 0, sizeof(struct snapshot_handle));
> > + if (!crypto_init(true))
> > + /* The image needs to be encrypted. */
> > + handle.crypto = true;
> > error = snapshot_read_next(&snapshot);
> > if (error < PAGE_SIZE) {
> > if (error >= 0)
> > @@ -1059,7 +1160,8 @@ static int load_image(struct swap_map_handle *handle,
> > ktime_t stop;
> > struct hib_bio_batch hb;
> > int err2;
> > - unsigned nr_pages;
> > + unsigned nr_pages, crypto_page_idx;
> > + void *crypt_buf = NULL;
> >
> > hib_init_batch(&hb);
> >
> > @@ -1069,18 +1171,42 @@ static int load_image(struct swap_map_handle *handle,
> > if (!m)
> > m = 1;
> > nr_pages = 0;
> > + crypto_page_idx = 0;
> > + if (handle->crypto) {
> > + crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
> > + if (!crypt_buf)
> > + return -ENOMEM;
> > + }
> > start = ktime_get();
> > for ( ; ; ) {
> > ret = snapshot_write_next(snapshot);
> > if (ret <= 0)
> > break;
> > - ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > + if (handle->crypto)
> > + ret = swap_read_page(handle, crypt_buf, &hb);
> > + else
> > + ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > if (ret)
> > break;
> > if (snapshot->sync_read)
> > ret = hib_wait_io(&hb);
> > if (ret)
> > break;
> > + if (handle->crypto) {
> > + /*
> > + * Need a decryption for the
> > + * data read from the block
> > + * device.
> > + */
> > + ret = crypto_data(crypt_buf, PAGE_SIZE,
> > + data_of(*snapshot),
> > + PAGE_SIZE,
> > + false,
> > + crypto_page_idx);
> > + if (ret)
> > + break;
> > + crypto_page_idx++;
> > + }
> > if (!(nr_pages % m))
> > pr_info("Image loading progress: %3d%%\n",
> > nr_pages / m * 10);
> > @@ -1097,6 +1223,8 @@ static int load_image(struct swap_map_handle *handle,
> > ret = -ENODATA;
> > }
> > swsusp_show_speed(start, stop, nr_to_read, "Read");
> > + if (crypt_buf)
> > + free_page((unsigned long)crypt_buf);
> > return ret;
> > }
> >
> > @@ -1164,7 +1292,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
> > struct hib_bio_batch hb;
> > ktime_t start;
> > ktime_t stop;
> > - unsigned nr_pages;
> > + unsigned nr_pages, crypto_page_idx;
> > size_t off;
> > unsigned i, thr, run_threads, nr_threads;
> > unsigned ring = 0, pg = 0, ring_size = 0,
> > @@ -1173,6 +1301,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
> > unsigned char **page = NULL;
> > struct dec_data *data = NULL;
> > struct crc_data *crc = NULL;
> > + void *first_page = NULL;
> >
> > hib_init_batch(&hb);
> >
> > @@ -1278,6 +1407,18 @@ static int load_image_lzo(struct swap_map_handle *handle,
> > }
> > want = ring_size = i;
> >
> > + /*
> > + * The first page of data[thr] contains the length of
> > + * compressed data, this page should not mess up the
> > + * read buffer, so we allocate a separate page for it.
> > + */
> > + if (handle->crypto) {
> > + first_page = (void *)get_zeroed_page(GFP_KERNEL);
> > + if (!first_page) {
> > + ret = -ENOMEM;
> > + goto out_clean;
> > + }
> > + }
> > pr_info("Using %u thread(s) for decompression\n", nr_threads);
> > pr_info("Loading and decompressing image data (%u pages)...\n",
> > nr_to_read);
> > @@ -1285,6 +1426,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
> > if (!m)
> > m = 1;
> > nr_pages = 0;
> > + crypto_page_idx = 0;
> > start = ktime_get();
> >
> > ret = snapshot_write_next(snapshot);
> > @@ -1336,7 +1478,24 @@ static int load_image_lzo(struct swap_map_handle *handle,
> > }
> >
> > for (thr = 0; have && thr < nr_threads; thr++) {
> > - data[thr].cmp_len = *(size_t *)page[pg];
> > + if (handle->crypto) {
> > + /*
> > + * Need to decrypt the first page
> > + * of each data[thr], which contains
> > + * the compressed data length.
> > + */
> > + ret = crypto_data(page[pg],
> > + PAGE_SIZE,
> > + first_page,
> > + PAGE_SIZE,
> > + false,
> > + crypto_page_idx);
> > + if (ret)
> > + goto out_finish;
> > + data[thr].cmp_len = *(size_t *)first_page;
> > + } else {
> > + data[thr].cmp_len = *(size_t *)page[pg];
> > + }
> > if (unlikely(!data[thr].cmp_len ||
> > data[thr].cmp_len >
> > lzo1x_worst_compress(LZO_UNC_SIZE))) {
> > @@ -1358,8 +1517,26 @@ static int load_image_lzo(struct swap_map_handle *handle,
> > for (off = 0;
> > off < LZO_HEADER + data[thr].cmp_len;
> > off += PAGE_SIZE) {
> > - memcpy(data[thr].cmp + off,
> > - page[pg], PAGE_SIZE);
> > + if (handle->crypto) {
> > + /*
> > + * Decrypt the compressed data
> > + * and leverage the decompression
> > + * threads to get it done.
> > + */
> > + ret = crypto_data(page[pg],
> > + PAGE_SIZE,
> > + data[thr].cmp + off,
> > + PAGE_SIZE,
> > + false,
> > + crypto_page_idx);
> > + if (ret)
> > + goto out_finish;
> > + crypto_page_idx++;
> > + } else {
> > + memcpy(data[thr].cmp + off,
> > + page[pg], PAGE_SIZE);
> > +
> > + }
> > have--;
> > want++;
> > if (++pg >= ring_size)
> > @@ -1452,6 +1629,8 @@ static int load_image_lzo(struct swap_map_handle *handle,
> > out_clean:
> > for (i = 0; i < ring_size; i++)
> > free_page((unsigned long)page[i]);
> > + if (first_page)
> > + free_page((unsigned long)first_page);
> > if (crc) {
> > if (crc->thr)
> > kthread_stop(crc->thr);
> > @@ -1482,6 +1661,7 @@ int swsusp_read(unsigned int *flags_p)
> > struct swsusp_info *header;
> >
> > memset(&snapshot, 0, sizeof(struct snapshot_handle));
> > + memset(&handle, 0, sizeof(struct swap_map_handle));
> > error = snapshot_write_next(&snapshot);
> > if (error < PAGE_SIZE)
> > return error < 0 ? error : -EFAULT;
> > @@ -1489,6 +1669,16 @@ int swsusp_read(unsigned int *flags_p)
> > error = get_swap_reader(&handle, flags_p);
> > if (error)
> > goto end;
> > + if (*flags_p & SF_ENCRYPT_MODE) {
> > + error = crypto_init(false);
> > + if (!error) {
> > + /* The image has been encrypted. */
> > + handle.crypto = true;
> > + } else {
> > + pr_err("Failed to init cipher during resume.\n");
> > + goto end;
> > + }
> > + }
> > if (!error)
> > error = swap_read_page(&handle, header, NULL);
> > if (!error) {
> > @@ -1526,6 +1716,9 @@ int swsusp_check(void)
> >
> > if (!memcmp(HIBERNATE_SIG, swsusp_header->sig, 10)) {
> > memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10);
> > + /* Read salt passed from previous kernel. */
> > + if (swsusp_header->flags & SF_ENCRYPT_MODE)
> > + crypto_restore((void *)&swsusp_header->salt);
> > /* Reset swap signature now */
> > error = hib_submit_io(REQ_OP_WRITE, REQ_SYNC,
> > swsusp_resume_block,
> > --
> > 2.7.4
> >

2018-06-28 14:31:52

by joeyli

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

On Thu, Jun 28, 2018 at 09:50:17PM +0800, Yu Chen wrote:
> Hi,
> On Thu, Jun 28, 2018 at 09:07:20PM +0800, joeyli wrote:
> > Hi Chen Yu,
> >
> > On Wed, Jun 20, 2018 at 05:40:32PM +0800, Chen Yu wrote:
> > > Use the helper functions introduced previously to encrypt
> > > the page data before they are submitted to the block device.
> > > Besides, for the case of hibernation compression, the data
> > > are firstly compressed and then encrypted, and vice versa
> > > for the resume process.
> > >
> >
> > I want to suggest my solution that it direct signs/encrypts the
> > memory snapshot image. This solution is already shipped with
> > SLE12 a couple of years:
> >
> > https://github.com/joeyli/linux-s4sign/commits/s4sign-hmac-encrypted-key-v0.2-v4.17-rc3
> >
> I did not see image page encryption in above link, if I understand

PM / hibernate: Generate and verify signature for snapshot image
https://github.com/joeyli/linux-s4sign/commit/bae39460393ada4c0226dd07cd5e3afcef86b71f

PM / hibernate: snapshot image encryption
https://github.com/joeyli/linux-s4sign/commit/6a9a0113bb221c036ebd0f6321b7191283fe4929

The above patches sign and encrypt the data pages in the snapshot image.
The signature is put into the image header.

> correctly, your code focus on signation and encrypt the hidden data,
> but not target for the whole snapshot data.

Please ignore the hidden area, because the snapshot image is already
encrypted; that data no longer needs to be erased from the snapshot.
I will remove the hidden area patches in the next version.

> > The above patches still need to clean up. I am working on some
> > other bugs, but I can clean up and send out it ASAP.
> >
> > The advantage of this solution is that it produces a signed and
> > encrypted image. Not just for writing to block device by kernel,
> > it also can provide a signed/encrypted image to user space. User
> > space can store the encrypted image to anywhere.
> >
> > I am OK for your user space key generator because I didn't have
> > similar solution yet. I am working on the EFI master key and also
> > want to adapt hibernation to keyring. I will continue the works.
> >
> The user space tool can easily add the keyring support besides
> ioctl if needed.
>

Understood.

Thanks
Joey Lee

2018-06-28 21:53:07

by Chen Yu

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

Hi,
On Thu, Jun 28, 2018 at 10:28:56PM +0800, joeyli wrote:
> On Thu, Jun 28, 2018 at 09:50:17PM +0800, Yu Chen wrote:
> > Hi,
> > On Thu, Jun 28, 2018 at 09:07:20PM +0800, joeyli wrote:
> > > Hi Chen Yu,
> > >
> > > On Wed, Jun 20, 2018 at 05:40:32PM +0800, Chen Yu wrote:
> > > > Use the helper functions introduced previously to encrypt
> > > > the page data before they are submitted to the block device.
> > > > Besides, for the case of hibernation compression, the data
> > > > are firstly compressed and then encrypted, and vice versa
> > > > for the resume process.
> > > >
> > >
> > > I want to suggest my solution that it direct signs/encrypts the
> > > memory snapshot image. This solution is already shipped with
> > > SLE12 a couple of years:
> > >
> > > https://github.com/joeyli/linux-s4sign/commits/s4sign-hmac-encrypted-key-v0.2-v4.17-rc3
> > >
> > I did not see image page encryption in above link, if I understand
>
> PM / hibernate: Generate and verify signature for snapshot image
> https://github.com/joeyli/linux-s4sign/commit/bae39460393ada4c0226dd07cd5e3afcef86b71f
>
> PM / hibernate: snapshot image encryption
> https://github.com/joeyli/linux-s4sign/commit/6a9a0113bb221c036ebd0f6321b7191283fe4929
>
> The above patches sign and encrypt the data pages in snapshot image.
> It puts the signature to header.
>
It looks like your signature code can simply be added on top of the
solution we proposed here; how about collaborating on this task?
Just my 2 cents:
1. The encryption code should be independent of the snapshot code,
which is why we implement it as a kernel module in PATCH[1/3]
(a registration sketch follows below this list).
2. There's no need to traverse the snapshot image twice; if the
image is large (there is such a requirement on servers now) we can
simply do the encryption right before the disk IO, and that is
why PATCH[2/3] looks the way it does.
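
As an illustration of point 1, here is a minimal sketch (not taken from
PATCH[1/3]) of how a separate module could plug into the swap.c wrappers
quoted above.  The struct hibernation_crypto member names follow the
callers in swap.c; the callback bodies and the set_hibernation_ops(NULL)
unregistration are assumptions:

    /* Sketch only: a module that registers the hooks used by swap.c. */
    #include <linux/module.h>
    #include <linux/string.h>
    #include "power.h"   /* assumed to declare struct hibernation_crypto */

    static int demo_init(bool suspend)
    {
        /* Set up the cipher/key here; returning 0 lets swap.c set handle->crypto. */
        return 0;
    }

    static int demo_crypto_data(const char *inbuf, int inlen, char *outbuf,
                                int outlen, bool encrypt, int page_idx)
    {
        /* Encrypt or decrypt one PAGE_SIZE chunk; this sketch only copies. */
        if (inlen > outlen)
            return -EINVAL;
        memcpy(outbuf, inbuf, inlen);
        return 0;
    }

    /* Export the salt for the image header / take it back on resume. */
    static void demo_save(void *outbuf)   { }
    static void demo_restore(void *inbuf) { }

    static struct hibernation_crypto demo_ops = {
        .init        = demo_init,
        .crypto_data = demo_crypto_data,
        .save        = demo_save,
        .restore     = demo_restore,
    };

    static int __init demo_module_init(void)
    {
        set_hibernation_ops(&demo_ops);
        return 0;
    }

    static void __exit demo_module_exit(void)
    {
        set_hibernation_ops(NULL);   /* assumed to be the way to unregister */
    }

    module_init(demo_module_init);
    module_exit(demo_module_exit);
    MODULE_LICENSE("GPL");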
> > correctly, your code focus on signation and encrypt the hidden data,
> > but not target for the whole snapshot data.
>
> Please ignore the hidden area because we already encrypted snapshot image.
> Those data doesn't need to erase from snapshot anymore. I will remove
> the hidden area patches in next version.
>
> > > The above patches still need to clean up. I am working on some
> > > other bugs, but I can clean up and send out it ASAP.
> > >
> > > The advantage of this solution is that it produces a signed and
> > > encrypted image. Not just for writing to block device by kernel,
> > > it also can provide a signed/encrypted image to user space. User
> > > space can store the encrypted image to anywhere.
> > >
> > > I am OK for your user space key generator because I didn't have
> > > similar solution yet. I am working on the EFI master key and also
> > > want to adapt hibernation to keyring. I will continue the works.
> > >
> > The user space tool can easily add the keyring support besides
> > ioctl if needed.
> >
>
> Understood.
>
> Thanks
> Joey Lee
>
Best,
Yu

2018-06-29 14:22:52

by joeyli

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

On Thu, Jun 28, 2018 at 10:52:07PM +0800, Yu Chen wrote:
> Hi,
> On Thu, Jun 28, 2018 at 10:28:56PM +0800, joeyli wrote:
> > On Thu, Jun 28, 2018 at 09:50:17PM +0800, Yu Chen wrote:
> > > Hi,
> > > On Thu, Jun 28, 2018 at 09:07:20PM +0800, joeyli wrote:
> > > > Hi Chen Yu,
> > > >
> > > > On Wed, Jun 20, 2018 at 05:40:32PM +0800, Chen Yu wrote:
> > > > > Use the helper functions introduced previously to encrypt
> > > > > the page data before they are submitted to the block device.
> > > > > Besides, for the case of hibernation compression, the data
> > > > > are firstly compressed and then encrypted, and vice versa
> > > > > for the resume process.
> > > > >
> > > >
> > > > I want to suggest my solution that it direct signs/encrypts the
> > > > memory snapshot image. This solution is already shipped with
> > > > SLE12 a couple of years:
> > > >
> > > > https://github.com/joeyli/linux-s4sign/commits/s4sign-hmac-encrypted-key-v0.2-v4.17-rc3
> > > >
> > > I did not see image page encryption in above link, if I understand
> >
> > PM / hibernate: Generate and verify signature for snapshot image
> > https://github.com/joeyli/linux-s4sign/commit/bae39460393ada4c0226dd07cd5e3afcef86b71f
> >
> > PM / hibernate: snapshot image encryption
> > https://github.com/joeyli/linux-s4sign/commit/6a9a0113bb221c036ebd0f6321b7191283fe4929
> >
> > The above patches sign and encrypt the data pages in snapshot image.
> > It puts the signature to header.
> >
> It looks like your signature code can be simplyly added on top of the
> solution we proposed here, how about we collaborating on this task?

OK, I will respin my signature patches on top of your user key solution.

> just my 2 cents,
> 1. The cryption code should be indepent of the snapshot code, and
> this is why we implement it as a kernel module for that in PATCH[1/3].

Why must the encryption code be independent of the snapshot code?

> 2. There's no need to traverse the snapshot image twice, if the
> image is large(there's requirement on servers now) we can
> simplyly do the encryption before the disk IO, and this is
> why PATCH[2/3] looks like this.

If the encryption solution only covers the block device path, then the uswsusp
interface must be locked down when the kernel is in lockdown mode:

uswsusp: Disable when the kernel is locked down
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/commit/?h=lockdown-20180410&id=8732c1663d7c0305ae01ba5a1ee4d2299b7b4612

I still suggest keeping the solution that directly encrypts the snapshot
image for uswsusp, because then the snapshot can be encrypted by the kernel
before user space gets it.

I mean that if uswsusp is used, the kernel directly encrypts the
snapshot image; otherwise the kernel encrypts the pages before block io.

On the other hand, I have a question about asynchronous block io.
Please see below...

> > > > > Suggested-by: Rafael J. Wysocki <[email protected]>
> > > > > Cc: Rafael J. Wysocki <[email protected]>
> > > > > Cc: Pavel Machek <[email protected]>
> > > > > Cc: Len Brown <[email protected]>
> > > > > Cc: Borislav Petkov <[email protected]>
> > > > > Cc: "Lee, Chun-Yi" <[email protected]>
> > > > > Cc: [email protected]
> > > > > Cc: [email protected]
> > > > > Signed-off-by: Chen Yu <[email protected]>
> > > > > ---
> > > > > kernel/power/power.h | 1 +
> > > > > kernel/power/swap.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++---
> > > > > 2 files changed, 205 insertions(+), 11 deletions(-)
[...snip]
> > > > > /* kernel/power/hibernate.c */
> > > > > extern int swsusp_check(void);
> > > > > diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> > > > > index c2bcf97..2b6b3d0 100644
> > > > > --- a/kernel/power/swap.c
> > > > > +++ b/kernel/power/swap.c
[...snip]
> > > > > @@ -1069,18 +1171,42 @@ static int load_image(struct swap_map_handle *handle,
> > > > > if (!m)
> > > > > m = 1;
> > > > > nr_pages = 0;
> > > > > + crypto_page_idx = 0;
> > > > > + if (handle->crypto) {
> > > > > + crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
> > > > > + if (!crypt_buf)
> > > > > + return -ENOMEM;
> > > > > + }
> > > > > start = ktime_get();
> > > > > for ( ; ; ) {
> > > > > ret = snapshot_write_next(snapshot);
> > > > > if (ret <= 0)
> > > > > break;
> > > > > - ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > > > > + if (handle->crypto)
> > > > > + ret = swap_read_page(handle, crypt_buf, &hb);
> > > > > + else
> > > > > + ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > > > > if (ret)
> > > > > break;
> > > > > if (snapshot->sync_read)
> > > > > ret = hib_wait_io(&hb);

In snapshot_write_next(), the logic clears snapshot->sync_read when the
buffer page is not the original page, which means that the page may be
read by asynchronous block io.  Otherwise, the kernel calls hib_wait_io()
to wait until the block io is done.

> > > > > if (ret)
> > > > > break;
> > > > > + if (handle->crypto) {
> > > > > + /*
> > > > > + * Need a decryption for the
> > > > > + * data read from the block
> > > > > + * device.
> > > > > + */
> > > > > + ret = crypto_data(crypt_buf, PAGE_SIZE,
> > > > > + data_of(*snapshot),
> > > > > + PAGE_SIZE,
> > > > > + false,
> > > > > + crypto_page_idx);
> > > > > + if (ret)
> > > > > + break;
> > > > > + crypto_page_idx++;
> > > > > + }

The decryption is done here in the for-loop, but maybe the page is still
sitting in the block io queue waiting for the batched read?  In that case
the page content has not actually been read into memory yet when
crypto_data() runs.

> > > > > if (!(nr_pages % m))
> > > > > pr_info("Image loading progress: %3d%%\n",
> > > > > nr_pages / m * 10);
nr_pages++;
}
err2 = hib_wait_io(&hb);
stop = ktime_get();

When the for-loop breaks, the hib_wait_io(&hb) above guarantees that all
asynchronous block io is done; only then have all pages been read into memory.

I think the decryption code must be moved to after the for-loop breaks.
Alternatively, is there a callback hook in the asynchronous block io path
where we could place the decryption once the block io has read the page?
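
A rough sketch of the first option (not part of the posted patch): record
the destination pages while the bios are in flight and decrypt them only
after hib_wait_io() returns.  The pages[]/nr_read bookkeeping and the reuse
of crypt_buf as a bounce page are assumptions, and the sync_read handling
is omitted for brevity:

    err2 = hib_wait_io(&hb);    /* all asynchronous reads are complete here */
    stop = ktime_get();
    if (handle->crypto) {
        /* pages[] and nr_read would be filled in inside the read loop. */
        for (i = 0; !ret && !err2 && i < nr_read; i++) {
            memcpy(crypt_buf, pages[i], PAGE_SIZE);   /* ciphertext -> bounce */
            ret = crypto_data(crypt_buf, PAGE_SIZE,   /* decrypt back in place */
                              pages[i], PAGE_SIZE,
                              false, i);
        }
    }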

Thanks a lot!
Joey Lee

2018-07-05 16:17:53

by joeyli

[permalink] [raw]
Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

Hi Chen Yu,

On Wed, Jun 20, 2018 at 05:39:37PM +0800, Chen Yu wrote:
> Hi,
> As security becomes more and more important, we add the in-kernel
> encryption support for hibernation.
>
> This prototype is a trial version to implement the hibernation
> encryption in the kernel, so that the users do not have to rely
> on third-party tools to encrypt the hibernation image. The only
> dependency on user space is that, the user space should provide
> a valid key derived from passphrase to the kernel for image encryption.
>
> There was a discussion on the mailing list on whether this key should
> be derived in kernel or in user space. And it turns out to be generating
> the key by user space is more acceptable[1]. So this patch set is divided
> into two parts:
> 1. The hibernation snapshot encryption in kernel space,
> 2. the key derivation implementation in user space.
>
> Please refer to each patch for detail, and feel free to comment on
> this, thanks.
>
> [1] https://www.spinics.net/lists/linux-crypto/msg33145.html
>
> Chen Yu (3):
> PM / Hibernate: Add helper functions for hibernation encryption
> PM / Hibernate: Encrypt the snapshot pages before submitted to the
> block device
> tools: create power/crypto utility
>

I am trying this patch set.

Could you please tell me how to test the user space crypto utility with
systemd's hibernation module?

I have a question about the salt.  If the salt is saved in the image
header, does that mean the kernel needs to read the image header before
the user space crypto utility is launched?  Otherwise user space cannot
get the salt to derive the key.  I am a bit confused about the resume
process.

Thanks
Joey Lee

2018-07-06 13:38:33

by Chen Yu

[permalink] [raw]
Subject: Re: [PATCH 0/3][RFC] Introduce the in-kernel hibernation encryption

Sorry for the late reply.
On Fri, Jul 06, 2018 at 12:16:37AM +0800, joeyli wrote:
> Hi Chen Yu,
>
> On Wed, Jun 20, 2018 at 05:39:37PM +0800, Chen Yu wrote:
> > Hi,
> > As security becomes more and more important, we add the in-kernel
> > encryption support for hibernation.
> >
> > This prototype is a trial version to implement the hibernation
> > encryption in the kernel, so that the users do not have to rely
> > on third-party tools to encrypt the hibernation image. The only
> > dependency on user space is that, the user space should provide
> > a valid key derived from passphrase to the kernel for image encryption.
> >
> > There was a discussion on the mailing list on whether this key should
> > be derived in kernel or in user space. And it turns out to be generating
> > the key by user space is more acceptable[1]. So this patch set is divided
> > into two parts:
> > 1. The hibernation snapshot encryption in kernel space,
> > 2. the key derivation implementation in user space.
> >
> > Please refer to each patch for detail, and feel free to comment on
> > this, thanks.
> >
> > [1] https://www.spinics.net/lists/linux-crypto/msg33145.html
> >
> > Chen Yu (3):
> > PM / Hibernate: Add helper functions for hibernation encryption
> > PM / Hibernate: Encrypt the snapshot pages before submitted to the
> > block device
> > tools: create power/crypto utility
> >
>
> I am trying this patch set.
>
> Could you please tell me how to test the user space crypto utility with
> systemd's hibernation module?
>
Usage:
1. Install the kernel module:
modprobe crypto_hibernation
2. Run the tool to generate the key from
the user-provided passphrase:
./crypto_hibernate
3. Launch the hibernation process:
echo disk > /sys/power/state
4. The initrd launches crypto_hibernate
to read the previous salt from the kernel,
prompt for the user passphrase and generate the same key:
./crypto_hibernate
5. The kernel uses this key to decrypt the hibernation
snapshot.
> I have a question about the salt. If the salt is saved in image header,
> does that mean that kernel needs to read the image header before user
> space crypto utility be launched? Otherwise user space can not get
> the salt to produce key? I a bit confused about the resume process.
>
crypto_hibernate will first read the salt from the kernel via ioctl (the
kernel exposes the salt to user space in crypto_restore(), and
crypto_hibernate then reads it through the ioctl interface), then uses
that salt together with the user-provided passphrase to generate the key,
and finally passes that key back to the kernel for decryption.
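
A user-space sketch of this flow is below.  The device node name, the
ioctl request numbers and the key-derivation step are placeholders, not
the real interface defined by crypto_hibernate.c / crypto_hibernation.c;
the salt size is assumed to be 16 bytes and the key is 512 bits as stated
in the cover letter:

    /* Sketch only -- placeholder ioctl interface, not the posted one. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define SALT_BYTES 16   /* assumed value of HIBERNATE_SALT_BYTES */
    #define KEY_BYTES  64   /* 512-bit key, per the cover letter */

    /* Placeholder request numbers -- the real ones come from the patch. */
    #define DEMO_GET_SALT _IOR('C', 1, unsigned char[SALT_BYTES])
    #define DEMO_SET_KEY  _IOW('C', 2, unsigned char[KEY_BYTES])

    int main(void)
    {
        unsigned char salt[SALT_BYTES], key[KEY_BYTES];
        char pass[128];
        int fd = open("/dev/crypto_hibernate", O_RDWR); /* placeholder node */

        if (fd < 0)
            return 1;
        if (ioctl(fd, DEMO_GET_SALT, salt))     /* salt saved by crypto_restore() */
            goto fail;
        if (!fgets(pass, sizeof(pass), stdin))  /* read the passphrase */
            goto fail;
        /* Derive the 512-bit key with PBKDF2(pass, salt) here; a real
         * tool would call into a crypto library at this point. */
        memset(key, 0, sizeof(key));
        if (ioctl(fd, DEMO_SET_KEY, key))       /* hand the key to the kernel */
            goto fail;
        close(fd);
        return 0;
    fail:
        close(fd);
        return 1;
    }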
> Thanks
> Joey Lee

2018-07-06 15:24:30

by Chen Yu

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

Hi Joey Lee,
On Fri, Jun 29, 2018 at 08:59:43PM +0800, joeyli wrote:
> On Thu, Jun 28, 2018 at 10:52:07PM +0800, Yu Chen wrote:
> > Hi,
> > On Thu, Jun 28, 2018 at 10:28:56PM +0800, joeyli wrote:
> > > On Thu, Jun 28, 2018 at 09:50:17PM +0800, Yu Chen wrote:
> > > > Hi,
> > > > On Thu, Jun 28, 2018 at 09:07:20PM +0800, joeyli wrote:
> > > > > Hi Chen Yu,
> > > > >
> > > > > On Wed, Jun 20, 2018 at 05:40:32PM +0800, Chen Yu wrote:
> > > > > > Use the helper functions introduced previously to encrypt
> > > > > > the page data before they are submitted to the block device.
> > > > > > Besides, for the case of hibernation compression, the data
> > > > > > are firstly compressed and then encrypted, and vice versa
> > > > > > for the resume process.
> > > > > >
> > > > >
> > > > > I want to suggest my solution that it direct signs/encrypts the
> > > > > memory snapshot image. This solution is already shipped with
> > > > > SLE12 a couple of years:
> > > > >
> > > > > https://github.com/joeyli/linux-s4sign/commits/s4sign-hmac-encrypted-key-v0.2-v4.17-rc3
> > > > >
> > > > I did not see image page encryption in above link, if I understand
> > >
> > > PM / hibernate: Generate and verify signature for snapshot image
> > > https://github.com/joeyli/linux-s4sign/commit/bae39460393ada4c0226dd07cd5e3afcef86b71f
> > >
> > > PM / hibernate: snapshot image encryption
> > > https://github.com/joeyli/linux-s4sign/commit/6a9a0113bb221c036ebd0f6321b7191283fe4929
> > >
> > > The above patches sign and encrypt the data pages in snapshot image.
> > > It puts the signature to header.
> > >
> > It looks like your signature code can be simplyly added on top of the
> > solution we proposed here, how about we collaborating on this task?
>
> OK, I will base on your user key solution to respin my signature patches.
>
> > just my 2 cents,
> > 1. The cryption code should be indepent of the snapshot code, and
> > this is why we implement it as a kernel module for that in PATCH[1/3].
>
> Why the cryption code must be indepent of snapshot code?
>
Modules are easier to maintain and to customize/improve in the future, IMO.
> > 2. There's no need to traverse the snapshot image twice, if the
> > image is large(there's requirement on servers now) we can
> > simplyly do the encryption before the disk IO, and this is
> > why PATCH[2/3] looks like this.
>
> If the encryption solution is only for block device, then the uswsusp
> interface must be locked-down when kernel is in locked mode:
>
> uswsusp: Disable when the kernel is locked down
> https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/commit/?h=lockdown-20180410&id=8732c1663d7c0305ae01ba5a1ee4d2299b7b4612
>
> I still suggest to keep the solution to direct encript the snapshot
> image for uswsusp because the snapshot can be encrypted by kernel
> before user space get it.
>
> I mean that if the uswsusp be used, then kernel direct encrypts the
> snapshot image, otherwise kernel encrypts pages before block io.
>
I did not quite get the point: if the kernel has been locked down, then
uswsusp is disabled, so why would the kernel encrypt the snapshot for
uswsusp?
> On the other hand, I have a question about asynchronous block io.
> Please see below...
>
Okay.
> > > > > > Suggested-by: Rafael J. Wysocki <[email protected]>
> > > > > > Cc: Rafael J. Wysocki <[email protected]>
> > > > > > Cc: Pavel Machek <[email protected]>
> > > > > > Cc: Len Brown <[email protected]>
> > > > > > Cc: Borislav Petkov <[email protected]>
> > > > > > Cc: "Lee, Chun-Yi" <[email protected]>
> > > > > > Cc: [email protected]
> > > > > > Cc: [email protected]
> > > > > > Signed-off-by: Chen Yu <[email protected]>
> > > > > > ---
> > > > > > kernel/power/power.h | 1 +
> > > > > > kernel/power/swap.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++---
> > > > > > 2 files changed, 205 insertions(+), 11 deletions(-)
> [...snip]
> > > > > > /* kernel/power/hibernate.c */
> > > > > > extern int swsusp_check(void);
> > > > > > diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> > > > > > index c2bcf97..2b6b3d0 100644
> > > > > > --- a/kernel/power/swap.c
> > > > > > +++ b/kernel/power/swap.c
> [...snip]
> > > > > > @@ -1069,18 +1171,42 @@ static int load_image(struct swap_map_handle *handle,
> > > > > > if (!m)
> > > > > > m = 1;
> > > > > > nr_pages = 0;
> > > > > > + crypto_page_idx = 0;
> > > > > > + if (handle->crypto) {
> > > > > > + crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
> > > > > > + if (!crypt_buf)
> > > > > > + return -ENOMEM;
> > > > > > + }
> > > > > > start = ktime_get();
> > > > > > for ( ; ; ) {
> > > > > > ret = snapshot_write_next(snapshot);
> > > > > > if (ret <= 0)
> > > > > > break;
> > > > > > - ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > > > > > + if (handle->crypto)
> > > > > > + ret = swap_read_page(handle, crypt_buf, &hb);
> > > > > > + else
> > > > > > + ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > > > > > if (ret)
> > > > > > break;
> > > > > > if (snapshot->sync_read)
> > > > > > ret = hib_wait_io(&hb);
>
> In snapshot_write_next(), the logic will clean the snapshot->sync_read
> when the buffer page doesn't equal to the original page. Which means
> that the page can be read by asynchronous block io. Otherwise, kernel
> calls hib_wait_io() to wait until the block io was done.
>
Yes, you are right, I missed the asynchronous block io in the
non-compression case.
> > > > > > if (ret)
> > > > > > break;
> > > > > > + if (handle->crypto) {
> > > > > > + /*
> > > > > > + * Need a decryption for the
> > > > > > + * data read from the block
> > > > > > + * device.
> > > > > > + */
> > > > > > + ret = crypto_data(crypt_buf, PAGE_SIZE,
> > > > > > + data_of(*snapshot),
> > > > > > + PAGE_SIZE,
> > > > > > + false,
> > > > > > + crypto_page_idx);
> > > > > > + if (ret)
> > > > > > + break;
> > > > > > + crypto_page_idx++;
> > > > > > + }
>
> The decryption is here in the for-loop. But maybe the page is still in
> the block io queue for waiting the batch read? The page content is not
> really read to memory when the crypto_data be run?
>
Yes, it is possible.
> > > > > > if (!(nr_pages % m))
> > > > > > pr_info("Image loading progress: %3d%%\n",
> > > > > > nr_pages / m * 10);
> nr_pages++;
> }
> err2 = hib_wait_io(&hb);
> stop = ktime_get();
>
> When the for-loop is break, the above hib_wait_io(&hb) guarantees that
> all asynchronous block io are done. Then all pages are read to memory.
>
> I think that the decryption code must be moved after for-loop be break.
> Or there have any callback hook in the asynchronous block io that we
> can put the encryption code after the block io read the page.
>
Yes, we can move the decryption code there.  Another thought came to my
mind: how about disabling asynchronous block io in the non-compression
case when encrypted hibernation is enabled?  Encrypted hibernation does
not use the asynchronous block io anyway, since it has to traverse the
pages one by one, so asynchronous block io brings it no benefit in the
non-compression case.  Or am I missing something?
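
A minimal sketch of this idea (not part of the posted patch), assuming
that swap_read_page() falls back to a synchronous hib_submit_io() when no
hib_bio_batch is passed, so the ciphertext is guaranteed to be in
crypt_buf before crypto_data() runs:

    for ( ; ; ) {
        ret = snapshot_write_next(snapshot);
        if (ret <= 0)
            break;
        if (handle->crypto) {
            /* No &hb: synchronous read, then decrypt immediately. */
            ret = swap_read_page(handle, crypt_buf, NULL);
            if (!ret)
                ret = crypto_data(crypt_buf, PAGE_SIZE,
                                  data_of(*snapshot), PAGE_SIZE,
                                  false, crypto_page_idx++);
        } else {
            ret = swap_read_page(handle, data_of(*snapshot), &hb);
            if (!ret && snapshot->sync_read)
                ret = hib_wait_io(&hb);
        }
        if (ret)
            break;
        /* progress reporting and nr_pages++ as in the original loop */
    }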

Best,
Yu
> Thanks a lot!
> Joey Lee

2018-07-12 10:12:50

by joeyli

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

Hi Yu Chen,

Sorry for my delay...

On Fri, Jul 06, 2018 at 11:28:56PM +0800, Yu Chen wrote:
> Hi Joey Lee,
> On Fri, Jun 29, 2018 at 08:59:43PM +0800, joeyli wrote:
> > On Thu, Jun 28, 2018 at 10:52:07PM +0800, Yu Chen wrote:
> > > Hi,
> > > On Thu, Jun 28, 2018 at 10:28:56PM +0800, joeyli wrote:
> > > > On Thu, Jun 28, 2018 at 09:50:17PM +0800, Yu Chen wrote:
> > > > > Hi,
> > > > > On Thu, Jun 28, 2018 at 09:07:20PM +0800, joeyli wrote:
> > > > > > Hi Chen Yu,
> > > > > >
> > > > > > On Wed, Jun 20, 2018 at 05:40:32PM +0800, Chen Yu wrote:
> > > > > > > Use the helper functions introduced previously to encrypt
> > > > > > > the page data before they are submitted to the block device.
> > > > > > > Besides, for the case of hibernation compression, the data
> > > > > > > are firstly compressed and then encrypted, and vice versa
> > > > > > > for the resume process.
> > > > > > >
> > > > > >
> > > > > > I want to suggest my solution that it direct signs/encrypts the
> > > > > > memory snapshot image. This solution is already shipped with
> > > > > > SLE12 a couple of years:
> > > > > >
> > > > > > https://github.com/joeyli/linux-s4sign/commits/s4sign-hmac-encrypted-key-v0.2-v4.17-rc3
> > > > > >
> > > > > I did not see image page encryption in above link, if I understand
> > > >
> > > > PM / hibernate: Generate and verify signature for snapshot image
> > > > https://github.com/joeyli/linux-s4sign/commit/bae39460393ada4c0226dd07cd5e3afcef86b71f
> > > >
> > > > PM / hibernate: snapshot image encryption
> > > > https://github.com/joeyli/linux-s4sign/commit/6a9a0113bb221c036ebd0f6321b7191283fe4929
> > > >
> > > > The above patches sign and encrypt the data pages in snapshot image.
> > > > It puts the signature to header.
> > > >
> > > It looks like your signature code can be simplyly added on top of the
> > > solution we proposed here, how about we collaborating on this task?
> >
> > OK, I will base on your user key solution to respin my signature patches.
> >
> > > just my 2 cents,
> > > 1. The cryption code should be indepent of the snapshot code, and
> > > this is why we implement it as a kernel module for that in PATCH[1/3].
> >
> > Why the cryption code must be indepent of snapshot code?
> >
> Modules can be easier to be maintained and customized/improved in the future IMO..

Hmm... currently I don't see an apparent benefit in this...

Regarding modularization, is it possible to separate the key handling
code from crypto_hibernation.c?  Otherwise the snapshot signature would
have to depend on the encryption function.

> > > 2. There's no need to traverse the snapshot image twice, if the
> > > image is large(there's requirement on servers now) we can
> > > simplyly do the encryption before the disk IO, and this is
> > > why PATCH[2/3] looks like this.
> >
> > If the encryption solution is only for block device, then the uswsusp
> > interface must be locked-down when kernel is in locked mode:
> >
> > uswsusp: Disable when the kernel is locked down
> > https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/commit/?h=lockdown-20180410&id=8732c1663d7c0305ae01ba5a1ee4d2299b7b4612
> >
> > I still suggest to keep the solution to direct encript the snapshot
> > image for uswsusp because the snapshot can be encrypted by kernel
> > before user space get it.
> >
> > I mean that if the uswsusp be used, then kernel direct encrypts the
> > snapshot image, otherwise kernel encrypts pages before block io.
> >
> I did not quite get the point, if the kernel has been locked down,
> then the uswsusp is disabled, why the kernel encrypts the snapshot
> for uswsusp?

Some functions are locked down because there is no appropriate mechanism
to check the integrity of the data being written.  If the snapshot image
can be encrypted and authenticated, then the uswsusp interface stays
available to user space and we do not need to lock it down.

> > On the other hand, I have a question about asynchronous block io.
> > Please see below...
> >
> Okay.
> > > > > > > Suggested-by: Rafael J. Wysocki <[email protected]>
> > > > > > > Cc: Rafael J. Wysocki <[email protected]>
> > > > > > > Cc: Pavel Machek <[email protected]>
> > > > > > > Cc: Len Brown <[email protected]>
> > > > > > > Cc: Borislav Petkov <[email protected]>
> > > > > > > Cc: "Lee, Chun-Yi" <[email protected]>
> > > > > > > Cc: [email protected]
> > > > > > > Cc: [email protected]
> > > > > > > Signed-off-by: Chen Yu <[email protected]>
> > > > > > > ---
> > > > > > > kernel/power/power.h | 1 +
> > > > > > > kernel/power/swap.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++---
> > > > > > > 2 files changed, 205 insertions(+), 11 deletions(-)
> > [...snip]
> > > > > > > /* kernel/power/hibernate.c */
> > > > > > > extern int swsusp_check(void);
> > > > > > > diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> > > > > > > index c2bcf97..2b6b3d0 100644
> > > > > > > --- a/kernel/power/swap.c
> > > > > > > +++ b/kernel/power/swap.c
> > [...snip]
> > > > > > > @@ -1069,18 +1171,42 @@ static int load_image(struct swap_map_handle *handle,
> > > > > > > if (!m)
> > > > > > > m = 1;
> > > > > > > nr_pages = 0;
> > > > > > > + crypto_page_idx = 0;
> > > > > > > + if (handle->crypto) {
> > > > > > > + crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
> > > > > > > + if (!crypt_buf)
> > > > > > > + return -ENOMEM;
> > > > > > > + }
> > > > > > > start = ktime_get();
> > > > > > > for ( ; ; ) {
> > > > > > > ret = snapshot_write_next(snapshot);
> > > > > > > if (ret <= 0)
> > > > > > > break;
> > > > > > > - ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > > > > > > + if (handle->crypto)
> > > > > > > + ret = swap_read_page(handle, crypt_buf, &hb);
> > > > > > > + else
> > > > > > > + ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > > > > > > if (ret)
> > > > > > > break;
> > > > > > > if (snapshot->sync_read)
> > > > > > > ret = hib_wait_io(&hb);
> >
> > In snapshot_write_next(), the logic will clean the snapshot->sync_read
> > when the buffer page doesn't equal to the original page. Which means
> > that the page can be read by asynchronous block io. Otherwise, kernel
> > calls hib_wait_io() to wait until the block io was done.
> >
> Yes, you are right, I missed the asynchronous block io in non-compression
> case.
> > > > > > > if (ret)
> > > > > > > break;
> > > > > > > + if (handle->crypto) {
> > > > > > > + /*
> > > > > > > + * Need a decryption for the
> > > > > > > + * data read from the block
> > > > > > > + * device.
> > > > > > > + */
> > > > > > > + ret = crypto_data(crypt_buf, PAGE_SIZE,
> > > > > > > + data_of(*snapshot),
> > > > > > > + PAGE_SIZE,
> > > > > > > + false,
> > > > > > > + crypto_page_idx);
> > > > > > > + if (ret)
> > > > > > > + break;
> > > > > > > + crypto_page_idx++;
> > > > > > > + }
> >
> > The decryption is here in the for-loop. But maybe the page is still in
> > the block io queue for waiting the batch read? The page content is not
> > really read to memory when the crypto_data be run?
> >
> Yes, it is possible.
> > > > > > > if (!(nr_pages % m))
> > > > > > > pr_info("Image loading progress: %3d%%\n",
> > > > > > > nr_pages / m * 10);
> > nr_pages++;
> > }
> > err2 = hib_wait_io(&hb);
> > stop = ktime_get();
> >
> > When the for-loop is break, the above hib_wait_io(&hb) guarantees that
> > all asynchronous block io are done. Then all pages are read to memory.
> >
> > I think that the decryption code must be moved after for-loop be break.
> > Or there have any callback hook in the asynchronous block io that we
> > can put the encryption code after the block io read the page.
> >
> Yes, we can move the decryptino code here. Another thought came to my
> mind, how about disabling the asynchronous block io in non-compression
> case if encryption hibernation is enabled(because encryption hibernation
> is not using the asynchronous block io and it has to traverse each page
> one by one, so asynchronous block io does not bring benefit to encryption
> hibernation in non-compression case)? Or do I miss something?
>

Disabling asynchronous block IO will cause an IO performance drop. As I
remember, it is a significant drop. You can try it. I am not sure how many
people are still using non-compression mode.

I do not know too much about the block layer API. Is it possible to encrypt
each page in the kernel before it is written to the device?

Thanks
Joey Lee

2018-07-13 07:30:05

by Chen Yu

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

Hi,
On Thu, Jul 12, 2018 at 06:10:37PM +0800, joeyli wrote:
> Hi Yu Chen,
>
> Sorry for my delay...
>
> On Fri, Jul 06, 2018 at 11:28:56PM +0800, Yu Chen wrote:
> > Hi Joey Lee,
> > On Fri, Jun 29, 2018 at 08:59:43PM +0800, joeyli wrote:
> > > On Thu, Jun 28, 2018 at 10:52:07PM +0800, Yu Chen wrote:
> > > > Hi,
> > > > On Thu, Jun 28, 2018 at 10:28:56PM +0800, joeyli wrote:
> > > > > On Thu, Jun 28, 2018 at 09:50:17PM +0800, Yu Chen wrote:
> > > > > > Hi,
> > > > > > On Thu, Jun 28, 2018 at 09:07:20PM +0800, joeyli wrote:
> > > > > > > Hi Chen Yu,
> > > > > > >
> > > > > > > On Wed, Jun 20, 2018 at 05:40:32PM +0800, Chen Yu wrote:
> > > > > > > > Use the helper functions introduced previously to encrypt
> > > > > > > > the page data before they are submitted to the block device.
> > > > > > > > Besides, for the case of hibernation compression, the data
> > > > > > > > are firstly compressed and then encrypted, and vice versa
> > > > > > > > for the resume process.
> > > > > > > >
> > > > > > >
> > > > > > > I want to suggest my solution that it direct signs/encrypts the
> > > > > > > memory snapshot image. This solution is already shipped with
> > > > > > > SLE12 a couple of years:
> > > > > > >
> > > > > > > https://github.com/joeyli/linux-s4sign/commits/s4sign-hmac-encrypted-key-v0.2-v4.17-rc3
> > > > > > >
> > > > > > I did not see image page encryption in above link, if I understand
> > > > >
> > > > > PM / hibernate: Generate and verify signature for snapshot image
> > > > > https://github.com/joeyli/linux-s4sign/commit/bae39460393ada4c0226dd07cd5e3afcef86b71f
> > > > >
> > > > > PM / hibernate: snapshot image encryption
> > > > > https://github.com/joeyli/linux-s4sign/commit/6a9a0113bb221c036ebd0f6321b7191283fe4929
> > > > >
> > > > > The above patches sign and encrypt the data pages in snapshot image.
> > > > > It puts the signature to header.
> > > > >
> > > > It looks like your signature code can be simplyly added on top of the
> > > > solution we proposed here, how about we collaborating on this task?
> > >
> > > OK, I will base on your user key solution to respin my signature patches.
> > >
> > > > just my 2 cents,
> > > > 1. The cryption code should be indepent of the snapshot code, and
> > > > this is why we implement it as a kernel module for that in PATCH[1/3].
> > >
> > > Why the cryption code must be indepent of snapshot code?
> > >
> > Modules can be easier to be maintained and customized/improved in the future IMO..
>
> hm... currently I didn't see apparent benefit on this...
>
> About modularization, is it possible to separate the key handler code
> from crypto_hibernation.c? Otherwise the snapshot signature needs
> to depend on encrypt function.
>
I understand.
It seems that we can reuse crypto_data() for the signature logic.
For example, add one parameter to crypto_data(..., crypto_mode), where
crypto_mode is a combination of ENCRYPT/DECRYPT/SIGNATURE_START/SIGNATURE_END
and can be controlled via sysfs. If SIGNATURE_START is set, crypto_data()
invokes crypto_shash_update() to update the "hmac(sha512)" hash, and if
SIGNATURE_END is set, crypto_shash_final() stores the final result in the
global buffer struct hibernation_crypto.keys.digest[SNAPSHOT_DIGEST_SIZE],
which can be passed to the restore kernel in the same way as the salt.
Besides, the swsusp_prepare_hash() in your code could be moved into
init_crypto_helper(), so the signature key could also reuse the same key
passed from user space or obtained via get_efi_secret_key(). In this way,
the change in snapshot.c is minimal and the crypto facilities could be
maintained in one file.
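
Something along these lines (only a sketch to show the mode flags; the
CRYPTO_MODE_* names and the encrypt_one_page()/decrypt_one_page() helpers
are illustrative stand-ins for the existing cipher steps, not the real code):

#include <linux/bits.h>
#include <crypto/hash.h>

#define CRYPTO_MODE_ENCRYPT     BIT(0)
#define CRYPTO_MODE_DECRYPT     BIT(1)
#define CRYPTO_MODE_SIG_START   BIT(2)
#define CRYPTO_MODE_SIG_END     BIT(3)

/* Prepared once in init_crypto_helper() with the user-provided key. */
static struct shash_desc *hmac_desc;

/* Illustrative stand-ins for the current AES steps inside crypto_data(). */
static int encrypt_one_page(void *in, void *out, unsigned long idx);
static int decrypt_one_page(void *in, void *out, unsigned long idx);

static int crypto_data(void *in, unsigned int in_len,
                       void *out, unsigned int out_len,
                       unsigned int crypto_mode,
                       unsigned long page_idx)
{
        int ret = 0;

        if (crypto_mode & CRYPTO_MODE_ENCRYPT) {
                /* Hash the plaintext page before it is encrypted. */
                if (crypto_mode & CRYPTO_MODE_SIG_START)
                        ret = crypto_shash_update(hmac_desc, in, in_len);
                if (!ret)
                        ret = encrypt_one_page(in, out, page_idx);
        } else if (crypto_mode & CRYPTO_MODE_DECRYPT) {
                ret = decrypt_one_page(in, out, page_idx);
                /* Hash the plaintext page after it is decrypted. */
                if (!ret && (crypto_mode & CRYPTO_MODE_SIG_START))
                        ret = crypto_shash_update(hmac_desc, out, out_len);
        }
        if (ret)
                return ret;

        /*
         * After the last page, keep the digest next to the salt;
         * digest[] is the global buffer from PATCH 1/3.
         */
        if (crypto_mode & CRYPTO_MODE_SIG_END)
                ret = crypto_shash_final(hmac_desc,
                                         hibernation_crypto.keys.digest);

        return ret;
}
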
> > > > 2. There's no need to traverse the snapshot image twice, if the
> > > > image is large(there's requirement on servers now) we can
> > > > simplyly do the encryption before the disk IO, and this is
> > > > why PATCH[2/3] looks like this.
> > >
> > > If the encryption solution is only for block device, then the uswsusp
> > > interface must be locked-down when kernel is in locked mode:
> > >
> > > uswsusp: Disable when the kernel is locked down
> > > https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/commit/?h=lockdown-20180410&id=8732c1663d7c0305ae01ba5a1ee4d2299b7b4612
> > >
> > > I still suggest to keep the solution to direct encript the snapshot
> > > image for uswsusp because the snapshot can be encrypted by kernel
> > > before user space get it.
> > >
> > > I mean that if the uswsusp be used, then kernel direct encrypts the
> > > snapshot image, otherwise kernel encrypts pages before block io.
> > >
> > I did not quite get the point, if the kernel has been locked down,
> > then the uswsusp is disabled, why the kernel encrypts the snapshot
> > for uswsusp?
>
> There have some functions be locked-down because there have no
> appropriate mechanisms to check the integrity of writing data. If
> the snapshot image can be encrypted and authentication, then the
> uswsusp interface is avaiable for userspace. We do not need to lock
> it down.
OK, I got your point. To be honest, I like your implementation, which treats
the snapshot data separately, except that it might cause a small overhead
when traversing a large number of snapshot pages and couples the snapshot
logic a little with the crypto logic. Let me send our v2 using the
crypto-before-blockio approach for review, and maybe change to your solution
using the encapsulated APIs in crypto_hibernate.c.
>
> > > On the other hand, I have a question about asynchronous block io.
> > > Please see below...
> > >
> > Okay.
> > > > > > > > Suggested-by: Rafael J. Wysocki <[email protected]>
> > > > > > > > Cc: Rafael J. Wysocki <[email protected]>
> > > > > > > > Cc: Pavel Machek <[email protected]>
> > > > > > > > Cc: Len Brown <[email protected]>
> > > > > > > > Cc: Borislav Petkov <[email protected]>
> > > > > > > > Cc: "Lee, Chun-Yi" <[email protected]>
> > > > > > > > Cc: [email protected]
> > > > > > > > Cc: [email protected]
> > > > > > > > Signed-off-by: Chen Yu <[email protected]>
> > > > > > > > ---
> > > > > > > > kernel/power/power.h | 1 +
> > > > > > > > kernel/power/swap.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++---
> > > > > > > > 2 files changed, 205 insertions(+), 11 deletions(-)
> > > [...snip]
> > > > > > > > /* kernel/power/hibernate.c */
> > > > > > > > extern int swsusp_check(void);
> > > > > > > > diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> > > > > > > > index c2bcf97..2b6b3d0 100644
> > > > > > > > --- a/kernel/power/swap.c
> > > > > > > > +++ b/kernel/power/swap.c
> > > [...snip]
> > > > > > > > @@ -1069,18 +1171,42 @@ static int load_image(struct swap_map_handle *handle,
> > > > > > > > if (!m)
> > > > > > > > m = 1;
> > > > > > > > nr_pages = 0;
> > > > > > > > + crypto_page_idx = 0;
> > > > > > > > + if (handle->crypto) {
> > > > > > > > + crypt_buf = (void *)get_zeroed_page(GFP_KERNEL);
> > > > > > > > + if (!crypt_buf)
> > > > > > > > + return -ENOMEM;
> > > > > > > > + }
> > > > > > > > start = ktime_get();
> > > > > > > > for ( ; ; ) {
> > > > > > > > ret = snapshot_write_next(snapshot);
> > > > > > > > if (ret <= 0)
> > > > > > > > break;
> > > > > > > > - ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > > > > > > > + if (handle->crypto)
> > > > > > > > + ret = swap_read_page(handle, crypt_buf, &hb);
> > > > > > > > + else
> > > > > > > > + ret = swap_read_page(handle, data_of(*snapshot), &hb);
> > > > > > > > if (ret)
> > > > > > > > break;
> > > > > > > > if (snapshot->sync_read)
> > > > > > > > ret = hib_wait_io(&hb);
> > >
> > > In snapshot_write_next(), the logic will clean the snapshot->sync_read
> > > when the buffer page doesn't equal to the original page. Which means
> > > that the page can be read by asynchronous block io. Otherwise, kernel
> > > calls hib_wait_io() to wait until the block io was done.
> > >
> > Yes, you are right, I missed the asynchronous block io in non-compression
> > case.
> > > > > > > > if (ret)
> > > > > > > > break;
> > > > > > > > + if (handle->crypto) {
> > > > > > > > + /*
> > > > > > > > + * Need a decryption for the
> > > > > > > > + * data read from the block
> > > > > > > > + * device.
> > > > > > > > + */
> > > > > > > > + ret = crypto_data(crypt_buf, PAGE_SIZE,
> > > > > > > > + data_of(*snapshot),
> > > > > > > > + PAGE_SIZE,
> > > > > > > > + false,
> > > > > > > > + crypto_page_idx);
> > > > > > > > + if (ret)
> > > > > > > > + break;
> > > > > > > > + crypto_page_idx++;
> > > > > > > > + }
> > >
> > > The decryption is here in the for-loop. But maybe the page is still in
> > > the block io queue for waiting the batch read? The page content is not
> > > really read to memory when the crypto_data be run?
> > >
> > Yes, it is possible.
> > > > > > > > if (!(nr_pages % m))
> > > > > > > > pr_info("Image loading progress: %3d%%\n",
> > > > > > > > nr_pages / m * 10);
> > > nr_pages++;
> > > }
> > > err2 = hib_wait_io(&hb);
> > > stop = ktime_get();
> > >
> > > When the for-loop is break, the above hib_wait_io(&hb) guarantees that
> > > all asynchronous block io are done. Then all pages are read to memory.
> > >
> > > I think that the decryption code must be moved after for-loop be break.
> > > Or there have any callback hook in the asynchronous block io that we
> > > can put the encryption code after the block io read the page.
> > >
> > Yes, we can move the decryptino code here. Another thought came to my
> > mind, how about disabling the asynchronous block io in non-compression
> > case if encryption hibernation is enabled(because encryption hibernation
> > is not using the asynchronous block io and it has to traverse each page
> > one by one, so asynchronous block io does not bring benefit to encryption
> > hibernation in non-compression case)? Or do I miss something?
> >
>
> Disabling asynchronous block will cause IO performance drops. As I remeber
> that it's a lot of drops. You can try it. I am not sure how many people
> are still using non-compression mode.
Let me compare the speed of using non-compression mode.
>
> I do not know too much on block layer API. Is it possible to encrypt each page
> before writing to device from kernel?
It might be hard, because the IO scheduler may merge different IO requests
together, and we could not figure out which sections of a merged request
should be encrypted.
>
> Thanks
> Joey Lee

2018-07-18 15:49:40

by joeyli

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

On Fri, Jul 13, 2018 at 03:34:25PM +0800, Yu Chen wrote:
> Hi,
> On Thu, Jul 12, 2018 at 06:10:37PM +0800, joeyli wrote:
> > Hi Yu Chen,
> >
> > Sorry for my delay...
> >
> > On Fri, Jul 06, 2018 at 11:28:56PM +0800, Yu Chen wrote:
[...snip]
> > > > Why the cryption code must be indepent of snapshot code?
> > > >
> > > Modules can be easier to be maintained and customized/improved in the future IMO..
> >
> > hm... currently I didn't see apparent benefit on this...
> >
> > About modularization, is it possible to separate the key handler code
> > from crypto_hibernation.c? Otherwise the snapshot signature needs
> > to depend on encrypt function.
> >
> I understand.
> It seems that we can reuse crypto_data() for the signature logic.
> For example, add one parameter for crypto_data(..., crypto_mode)
> the crypto_mode is a combination of ENCRYPT/DECRYPT/SIGNATURE_START/SIGNATURE_END,
> and can be controled by sysfs. If SIGNATURE_START is enabled, the crypto_data()
> invokes crypto_shash_update() to get "hmac(sha512)" hash, and if SIGNATURE_END
> is enabled, crypto_shash_final() stores the final result in global buffer

I agree that the swsusp_prepare and hmac-hash code can be extracted into
crypto_hibernation.

> struct hibernation_crypto.keys.digest[SNAPSHOT_DIGEST_SIZE] which can be
> passed to the restore kernel, the same as the salt. Besides,

The salt is stored in the swsusp_header and put in swap. What I want
is for the signature of the snapshot to be kept in the snapshot header,
as in the original design of my patch. The benefit is that a snapshot
with its signature can be stored anywhere, not just in swap. User space
can take the signed snapshot image anywhere.
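
Roughly what I mean (just a sketch, not the real patch; in current mainline
the snapshot header is struct swsusp_info in kernel/power/power.h, and the
signature field below is only illustrative):

struct swsusp_info {
        struct new_utsname      uts;
        u32                     version_code;
        unsigned long           num_physpages;
        /* ... other existing fields ... */
        /* hmac(sha512) over the snapshot data pages */
        u8                      signature[SNAPSHOT_DIGEST_SIZE];
} __aligned(PAGE_SIZE);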

> the swsusp_prepare_hash() in your code could be moved into
> init_crypto_helper(), thus the signature key could also reuse
> the same key passed from user space or via get_efi_secret_key().
> In this way, the change in snapshot.c is minimal and the crypto
> facilities could be maintained in one file.

I agree. But as I mentioned in that mail, the key handler code in
hibernation_crypto() should be extracted into another C file so that
this logic is not mixed with the crypto code. The benefit is that we can
add more key sources, such as an encrypted key or an EFI key, in the future.
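
For example (only an illustration, the structure and function names below
are made up, not from the current patches), the separate file could expose a
small key-source abstraction that both the encryption and the signature code
would use:

#include <linux/types.h>

struct hibernation_key_source {
        const char *name;
        /*
         * Fill up to key_len bytes of key material, e.g. from the
         * user-space ioctl, an encrypted key, or the EFI secret key.
         */
        int (*get_key)(u8 *key, unsigned int key_len);
};

/* The crypto code would then only ever call something like this: */
int hibernation_get_key(u8 *key, unsigned int key_len);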

> > > > > 2. There's no need to traverse the snapshot image twice, if the
> > > > > image is large(there's requirement on servers now) we can
> > > > > simplyly do the encryption before the disk IO, and this is
> > > > > why PATCH[2/3] looks like this.
> > > >
> > > > If the encryption solution is only for block device, then the uswsusp
> > > > interface must be locked-down when kernel is in locked mode:
> > > >
> > > > uswsusp: Disable when the kernel is locked down
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/commit/?h=lockdown-20180410&id=8732c1663d7c0305ae01ba5a1ee4d2299b7b4612
> > > >
> > > > I still suggest to keep the solution to direct encript the snapshot
> > > > image for uswsusp because the snapshot can be encrypted by kernel
> > > > before user space get it.
> > > >
> > > > I mean that if the uswsusp be used, then kernel direct encrypts the
> > > > snapshot image, otherwise kernel encrypts pages before block io.
> > > >
> > > I did not quite get the point, if the kernel has been locked down,
> > > then the uswsusp is disabled, why the kernel encrypts the snapshot
> > > for uswsusp?
> >
> > There have some functions be locked-down because there have no
> > appropriate mechanisms to check the integrity of writing data. If
> > the snapshot image can be encrypted and authentication, then the
> > uswsusp interface is avaiable for userspace. We do not need to lock
> > it down.
> Ok, I got your point. To be honest, I like your implementation to treat
> the snapshot data seperately except that it might cause small overhead
> when traversing large number of snapshot and make snapshot logic a little
> coupling with crypto logic. Let me send our v2 using the crypto-before-blockio
> for review and maybe change to your solution using the encapsulated APIs in
> crypto_hibernate.c.

Because I sometimes get stuck on other topics...

If you are in a hurry to push your encryption solution, please do not worry
too much about the signature check. Just complete your patches and push them
to the kernel mainline. I will respin my signature work on top of your result.

Thanks
Joey Lee

2018-07-19 09:11:54

by Chen Yu

[permalink] [raw]
Subject: Re: [PATCH 2/3][RFC] PM / Hibernate: Encrypt the snapshot pages before submitted to the block device

On Wed, Jul 18, 2018 at 11:48:19PM +0800, joeyli wrote:
> On Fri, Jul 13, 2018 at 03:34:25PM +0800, Yu Chen wrote:
> > Hi,
> > On Thu, Jul 12, 2018 at 06:10:37PM +0800, joeyli wrote:
> > > Hi Yu Chen,
> > >
> > > Sorry for my delay...
> > >
> > > On Fri, Jul 06, 2018 at 11:28:56PM +0800, Yu Chen wrote:
> [...snip]
> > > > > Why the cryption code must be indepent of snapshot code?
> > > > >
> > > > Modules can be easier to be maintained and customized/improved in the future IMO..
> > >
> > > hm... currently I didn't see apparent benefit on this...
> > >
> > > About modularization, is it possible to separate the key handler code
> > > from crypto_hibernation.c? Otherwise the snapshot signature needs
> > > to depend on encrypt function.
> > >
> > I understand.
> > It seems that we can reuse crypto_data() for the signature logic.
> > For example, add one parameter for crypto_data(..., crypto_mode)
> > the crypto_mode is a combination of ENCRYPT/DECRYPT/SIGNATURE_START/SIGNATURE_END,
> > and can be controled by sysfs. If SIGNATURE_START is enabled, the crypto_data()
> > invokes crypto_shash_update() to get "hmac(sha512)" hash, and if SIGNATURE_END
> > is enabled, crypto_shash_final() stores the final result in global buffer
>
> I agree that the swsusp_prepare and hmac-hash codes can be extract to
> crypto_hibernation.
>
> > struct hibernation_crypto.keys.digest[SNAPSHOT_DIGEST_SIZE] which can be
> > passed to the restore kernel, the same as the salt. Besides,
>
> The salt is stored in the swsusp_header and put in swap. What I want
> is that the signature of snapshot should be keep in the snapshot header
> as my original design in patch. The benefit is that the snapshot with
> signature can be stored to any place but not just swap. The user space
> can take snapshot image with signature to anywhere.
>
Okay, got it.
> > the swsusp_prepare_hash() in your code could be moved into
> > init_crypto_helper(), thus the signature key could also reuse
> > the same key passed from user space or via get_efi_secret_key().
> > In this way, the change in snapshot.c is minimal and the crypto
> > facilities could be maintained in one file.
>
> I agree. But as I mentioned in that mail, the key handler codes
> in hibernation_crypto() should be extract to another C file to avoid
> the logic is mixed with the crypto code. The benefit is that we can
> add more key sources like encrypted key or EFI key in the future.
>
Ok, will extract the key handler code from this file.
> > > > > > 2. There's no need to traverse the snapshot image twice, if the
> > > > > > image is large(there's requirement on servers now) we can
> > > > > > simplyly do the encryption before the disk IO, and this is
> > > > > > why PATCH[2/3] looks like this.
> > > > >
> > > > > If the encryption solution is only for block device, then the uswsusp
> > > > > interface must be locked-down when kernel is in locked mode:
> > > > >
> > > > > uswsusp: Disable when the kernel is locked down
> > > > > https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/commit/?h=lockdown-20180410&id=8732c1663d7c0305ae01ba5a1ee4d2299b7b4612
> > > > >
> > > > > I still suggest to keep the solution to direct encript the snapshot
> > > > > image for uswsusp because the snapshot can be encrypted by kernel
> > > > > before user space get it.
> > > > >
> > > > > I mean that if the uswsusp be used, then kernel direct encrypts the
> > > > > snapshot image, otherwise kernel encrypts pages before block io.
> > > > >
> > > > I did not quite get the point, if the kernel has been locked down,
> > > > then the uswsusp is disabled, why the kernel encrypts the snapshot
> > > > for uswsusp?
> > >
> > > There have some functions be locked-down because there have no
> > > appropriate mechanisms to check the integrity of writing data. If
> > > the snapshot image can be encrypted and authentication, then the
> > > uswsusp interface is avaiable for userspace. We do not need to lock
> > > it down.
> > Ok, I got your point. To be honest, I like your implementation to treat
> > the snapshot data seperately except that it might cause small overhead
> > when traversing large number of snapshot and make snapshot logic a little
> > coupling with crypto logic. Let me send our v2 using the crypto-before-blockio
> > for review and maybe change to your solution using the encapsulated APIs in
> > crypto_hibernate.c.
>
> Because sometimes I stick on other topics...
>
> If you are urgent for pushing your encryption solution. Please do not
> consider too much on signature check. Just complete your patches and push
> to kernel mainline. I will follow your result to respin my signature work.
>
Thanks, I had just sent out another version before I noticed your comment
yesterday. Let me address your concern in v3, but please feel free to
comment on v2.
Thanks,
Yu

> Thanks
> Joey Lee