2021-05-14 20:51:20

by Chang S. Bae

Subject: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

Key Locker [1][2] is a new security feature available in new Intel CPUs to
protect data encryption keys for the Advanced Encryption Standard
algorithm. The protection limits the amount of time an AES key is exposed
in memory by sealing a key and referencing it with new AES instructions.

The new AES instruction set is a successor to Intel's AES-NI (AES New
Instructions). Users may switch to the Key Locker version from crypto
libraries. This series includes a new AES implementation for the Crypto
API, which was validated through the crypto unit tests. The performance in
the test cases was measured and found comparable to the AES-NI version.

Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel
needs to load it and ensure it stays unchanged as long as CPUs are operational.
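
To illustrate the model, here is a minimal user-space sketch (not part of
this series), assuming a KL-enabled CPU and kernel and a toolchain that
ships the Key Locker intrinsics (e.g. gcc -mkl): a raw key is sealed into a
handle, the raw key is wiped, and encryption then uses the handle alone.

#include <immintrin.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char handle[48];			/* 384-bit handle */
	unsigned char raw_key[16] = "0123456789abcdef";	/* toy AES-128 key */
	__m128i key, block, out;

	key = _mm_loadu_si128((const __m128i *)raw_key);

	/* ENCODEKEY128: seal the raw key into a handle, no restrictions. */
	_mm_encodekey128_u32(0, key, handle);

	/*
	 * The raw key is no longer needed; scrub it. (Real code should
	 * use explicit_bzero() so the wipe cannot be optimized away.)
	 */
	memset(raw_key, 0, sizeof(raw_key));

	block = _mm_set1_epi8(0x41);

	/*
	 * AESENC128KL encrypts using only the handle. The return value
	 * mirrors ZF, which the CPU sets when the handle is rejected,
	 * e.g. after the internal wrapping key has changed (assumed
	 * mapping here: nonzero == failure).
	 */
	if (_mm_aesenc128kl_u8(&out, block, handle)) {
		fprintf(stderr, "handle rejected\n");
		return 1;
	}
	return 0;
}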

The series has three parts:
* PATCH1-7: Implement the internal key management
* PATCH8-10: Add a new AES implementation in the Crypto library
* PATCH11: Provide the hardware randomization option

Dan has asked for this to go out as another RFC to have more conversation
before asking the maintainers to consider this implementation.

Changes from RFC v1 [3]:
* Refactored the AES-NI implementation and fixed the AES-KL in the Crypto
API. (Ard Biesheuvel)
* Revised the AES implementation description. (Dave Hansen and Peter
Zijlstra)
* Noted the binutils version and made it a prerequisite. (Peter Zijlstra
and Borislav Petkov)
* Reorganized the helper functions and, with that, simplified the feature
enablement check.
* Added cache-line flushing when removing the key from memory.
* Separated the opcode map update into a new patch. Also, included AES
instructions.
* Refactored the LOADIWKEY instruction in a new helper.
* Folded the backup error warning. (Rafael Wysocki)
* Massaged changelog accordingly.

[1] Intel Architecture Instruction Set Extensions Programming Reference:
https://software.intel.com/content/dam/develop/external/us/en/documents-tps/architecture-instruction-set-extensions-programming-reference.pdf
[2] Intel Key Locker Specification:
https://software.intel.com/content/dam/develop/external/us/en/documents/343965-intel-key-locker-specification.pdf
[3] RFC v1: https://lore.kernel.org/lkml/[email protected]/

Chang S. Bae (11):
x86/cpufeature: Enumerate Key Locker feature
x86/insn: Add Key Locker instructions to the opcode map
x86/cpu: Load Key Locker internal key at boot-time
x86/msr-index: Add MSRs for Key Locker internal key
x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep
states
x86/cpu: Add a config option and a chicken bit for Key Locker
selftests/x86: Test Key Locker internal key maintenance
crypto: x86/aes-ni - Improve error handling
crypto: x86/aes-ni - Refactor to prepare a new AES implementation
crypto: x86/aes-kl - Support AES algorithm using Key Locker
instructions
x86/cpu: Support the hardware randomization option for Key Locker
internal key

.../admin-guide/kernel-parameters.txt | 2 +
arch/x86/Kconfig | 14 +
arch/x86/crypto/Makefile | 5 +-
arch/x86/crypto/aes-intel_asm.S | 26 +
arch/x86/crypto/aes-intel_glue.c | 208 +++
arch/x86/crypto/aes-intel_glue.h | 61 +
arch/x86/crypto/aeskl-intel_asm.S | 1181 +++++++++++++++++
arch/x86/crypto/aeskl-intel_glue.c | 390 ++++++
arch/x86/crypto/aesni-intel_asm.S | 90 +-
arch/x86/crypto/aesni-intel_glue.c | 310 +----
arch/x86/crypto/aesni-intel_glue.h | 88 ++
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/disabled-features.h | 8 +-
arch/x86/include/asm/keylocker.h | 30 +
arch/x86/include/asm/msr-index.h | 6 +
arch/x86/include/asm/special_insns.h | 36 +
arch/x86/include/uapi/asm/processor-flags.h | 2 +
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/cpu/common.c | 21 +-
arch/x86/kernel/cpu/cpuid-deps.c | 1 +
arch/x86/kernel/keylocker.c | 264 ++++
arch/x86/kernel/smpboot.c | 2 +
arch/x86/lib/x86-opcode-map.txt | 11 +-
arch/x86/power/cpu.c | 2 +
crypto/Kconfig | 23 +
drivers/char/random.c | 6 +
include/linux/random.h | 2 +
tools/arch/x86/lib/x86-opcode-map.txt | 11 +-
tools/testing/selftests/x86/Makefile | 2 +-
tools/testing/selftests/x86/keylocker.c | 177 +++
30 files changed, 2638 insertions(+), 343 deletions(-)
create mode 100644 arch/x86/crypto/aes-intel_asm.S
create mode 100644 arch/x86/crypto/aes-intel_glue.c
create mode 100644 arch/x86/crypto/aes-intel_glue.h
create mode 100644 arch/x86/crypto/aeskl-intel_asm.S
create mode 100644 arch/x86/crypto/aeskl-intel_glue.c
create mode 100644 arch/x86/crypto/aesni-intel_glue.h
create mode 100644 arch/x86/include/asm/keylocker.h
create mode 100644 arch/x86/kernel/keylocker.c
create mode 100644 tools/testing/selftests/x86/keylocker.c


base-commit: 6efb943b8616ec53a5e444193dccf1af9ad627b5
--
2.17.1



2021-05-14 20:51:47

by Chang S. Bae

Subject: [RFC PATCH v2 07/11] selftests/x86: Test Key Locker internal key maintenance

The test validates that the internal key is the same on all CPUs.

It performs the validation again after a Suspend-To-RAM (ACPI S3) cycle.

Signed-off-by: Chang S. Bae <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
Changes from RFC v1:
* Commented the binutils version number for ENCODEKEY128 (Peter Zijlstra)
---
tools/testing/selftests/x86/Makefile | 2 +-
tools/testing/selftests/x86/keylocker.c | 177 ++++++++++++++++++++++++
2 files changed, 178 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/x86/keylocker.c

diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
index 333980375bc7..09237cc84108 100644
--- a/tools/testing/selftests/x86/Makefile
+++ b/tools/testing/selftests/x86/Makefile
@@ -13,7 +13,7 @@ CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh $(CC) trivial_program.c -no-pie)
TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
check_initial_reg_state sigreturn iopl ioperm \
test_vsyscall mov_ss_trap \
- syscall_arg_fault fsgsbase_restore
+ syscall_arg_fault fsgsbase_restore keylocker
TARGETS_C_32BIT_ONLY := entry_from_vm86 test_syscall_vdso unwind_vdso \
test_FCMOV test_FCOMI test_FISTTP \
vdso_restorer
diff --git a/tools/testing/selftests/x86/keylocker.c b/tools/testing/selftests/x86/keylocker.c
new file mode 100644
index 000000000000..78bbb7939b1a
--- /dev/null
+++ b/tools/testing/selftests/x86/keylocker.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * keylocker.c, validate the internal key management.
+ */
+#undef _GNU_SOURCE
+#define _GNU_SOURCE 1
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <string.h>
+#include <fcntl.h>
+#include <err.h>
+#include <sched.h>
+#include <setjmp.h>
+#include <signal.h>
+#include <unistd.h>
+
+#define HANDLE_SIZE 48
+
+static bool keylocker_disabled;
+
+/* Encode a 128-bit key to a 384-bit handle */
+static inline void __encode_key(char *handle)
+{
+ static const unsigned char aeskey[] = { 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38,
+ 0x71, 0x77, 0x74, 0x69, 0x6f, 0x6b, 0x6c, 0x78 };
+
+ asm volatile ("movdqu %0, %%xmm0" : : "m" (*aeskey) :);
+
+ /* Set no restrictions on the handle */
+ asm volatile ("mov $0, %%eax" :);
+
+ /* ENCODEKEY128 %EAX (supported by binutils >= 2.36) */
+ asm volatile (".byte 0xf3, 0xf, 0x38, 0xfa, 0xc0");
+
+ asm volatile ("movdqu %%xmm0, %0; movdqu %%xmm1, %1; movdqu %%xmm2, %2;"
+ : "=m" (handle[0]), "=m" (handle[0x10]), "=m" (handle[0x20]));
+}
+
+static jmp_buf jmpbuf;
+
+static void handle_sigill(int sig, siginfo_t *si, void *ctx_void)
+{
+ keylocker_disabled = true;
+ siglongjmp(jmpbuf, 1);
+}
+
+static bool encode_key(char *handle)
+{
+ bool success = true;
+ struct sigaction sa;
+ int ret;
+
+ memset(&sa, 0, sizeof(sa));
+
+ /* Set signal handler */
+ sa.sa_flags = SA_SIGINFO;
+ sa.sa_sigaction = handle_sigill;
+ sigemptyset(&sa.sa_mask);
+ ret = sigaction(SIGILL, &sa, 0);
+ if (ret)
+ err(1, "sigaction");
+
+ if (sigsetjmp(jmpbuf, 1))
+ success = false;
+ else
+ __encode_key(handle);
+
+ /* Clear signal handler */
+ sa.sa_flags = 0;
+ sa.sa_sigaction = NULL;
+ sa.sa_handler = SIG_DFL;
+ sigemptyset(&sa.sa_mask);
+ ret = sigaction(SIGILL, &sa, 0);
+ if (ret)
+ err(1, "sigaction");
+
+ return success;
+}
+
+/*
+ * Test whether the internal key is the same on all CPUs:
+ *
+ * Since the key value is not readable, compare the encoded output of an
+ * AES key between CPUs.
+ */
+
+static int nerrs;
+
+static unsigned char cpu0_handle[HANDLE_SIZE] = { 0 };
+
+static void test_internal_key(bool slept, long cpus)
+{
+ int cpu, errs;
+
+ printf("Test the internal key consistency between CPUs\n");
+
+ for (cpu = 0, errs = 0; cpu < cpus; cpu++) {
+ char handle[HANDLE_SIZE] = { 0 };
+ cpu_set_t mask;
+ bool success;
+
+ CPU_ZERO(&mask);
+ CPU_SET(cpu, &mask);
+ sched_setaffinity(0, sizeof(cpu_set_t), &mask);
+
+ success = encode_key(handle);
+ if (!success) {
+ /* Encoding should succeed after the S3 sleep */
+ if (slept)
+ errs++;
+ printf("[%s]\tKey Locker disabled at CPU%d\n",
+ slept ? "FAIL" : "NOTE", cpu);
+ continue;
+ }
+
+ if (cpu == 0 && !slept) {
+ /* Record the first handle value as reference */
+ memcpy(cpu0_handle, handle, HANDLE_SIZE);
+ } else if (memcmp(cpu0_handle, handle, HANDLE_SIZE)) {
+ printf("[FAIL]\tMismatched internal key at CPU%d\n",
+ cpu);
+ errs++;
+ }
+ }
+
+ if (errs == 0 && !keylocker_disabled)
+ printf("[OK]\tAll the internal keys are the same\n");
+ else
+ nerrs += errs;
+}
+
+static void switch_to_sleep(bool *slept)
+{
+ ssize_t bytes;
+ int fd;
+
+ printf("Transition to Suspend-To-RAM state\n");
+
+ fd = open("/sys/power/mem_sleep", O_RDWR);
+ if (fd < 0)
+ err(1, "Open /sys/power/mem_sleep");
+
+ bytes = write(fd, "deep", strlen("deep"));
+ if (bytes != strlen("deep"))
+ err(1, "Write /sys/power/mem_sleep");
+ close(fd);
+
+ fd = open("/sys/power/state", O_RDWR);
+ if (fd < 0)
+ err(1, "Open /sys/power/state");
+
+ bytes = write(fd, "mem", strlen("mem"));
+ if (bytes != strlen("mem"))
+ err(1, "Write /sys/power/state");
+ close(fd);
+
+ printf("Wake up from Suspend-To-RAM state\n");
+ *slept = true;
+}
+
+int main(void)
+{
+ bool slept = false;
+ long cpus;
+
+ cpus = sysconf(_SC_NPROCESSORS_ONLN);
+ printf("%ld CPUs in the system\n", cpus);
+
+ test_internal_key(slept, cpus);
+ if (keylocker_disabled)
+ return nerrs ? 1 : 0;
+
+ switch_to_sleep(&slept);
+ test_internal_key(slept, cpus);
+ return nerrs ? 1 : 0;
+}
--
2.17.1


2021-05-15 23:24:07

by Andy Lutomirski

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On 5/14/21 1:14 PM, Chang S. Bae wrote:
> Key Locker [1][2] is a new security feature available in new Intel CPUs to
> protect data encryption keys for the Advanced Encryption Standard
> algorithm. The protection limits the amount of time an AES key is exposed
> in memory by sealing a key and referencing it with new AES instructions.
>
> The new AES instruction set is a successor to Intel's AES-NI (AES New
> Instructions). Users may switch to the Key Locker version from crypto
> libraries. This series includes a new AES implementation for the Crypto
> API, which was validated through the crypto unit tests. The performance in
> the test cases was measured and found comparable to the AES-NI version.
>
> Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel
> needs to load it and ensure it stays unchanged as long as CPUs are operational.

I have high-level questions:

What is the expected use case? My personal hypothesis, based on various
public Intel slides, is that the actual intended use case was internal
to the ME, and that KL was ported to end-user CPUs more or less
verbatim. I certainly understand how KL is valuable in a context where
a verified boot process installs some KL keys that are not subsequently
accessible outside the KL ISA, but Linux does not really work like this.
I'm wondering what people will use it for.

On a related note, does Intel plan to extend KL with ways to securely
load keys? (E.g. the ability to, in effect, LOADIWKEY from inside an
enclave? Key wrapping/unwrapping operations?) In other words,
should we look at KL the way we look at MKTME, i.e. the foundation of
something neat but not necessarily very useful as is, or should we
expect that KL is in its more or less final form?


What is the expected interaction between a KL-using VM guest and the
host VMM? Will there be performance impacts (to context switching, for
example) if a guest enables KL, even if the guest does not subsequently
do anything with it? Should Linux actually enable KL if it detects that
it's a VM guest? Should Linux use a specific keying method as a guest?

--Andy

2021-05-18 22:39:57

by Chang S. Bae

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On May 15, 2021, at 11:01, Andy Lutomirski <[email protected]> wrote:
> On 5/14/21 1:14 PM, Chang S. Bae wrote:
>> Key Locker [1][2] is a new security feature available in new Intel CPUs to
>> protect data encryption keys for the Advanced Encryption Standard
>> algorithm. The protection limits the amount of time an AES key is exposed
>> in memory by sealing a key and referencing it with new AES instructions.
>>
>> The new AES instruction set is a successor to Intel's AES-NI (AES New
>> Instructions). Users may switch to the Key Locker version from crypto
>> libraries. This series includes a new AES implementation for the Crypto
>> API, which was validated through the crypto unit tests. The performance in
>> the test cases was measured and found comparable to the AES-NI version.
>>
>> Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel
>> needs to load it and ensure it stays unchanged as long as CPUs are operational.
>
> I have high-level questions:
>
> What is the expected use case?

The wrapping key here is only used for new AES instructions.

I’m aware of potential use cases for encrypting file systems or disks.

> My personal hypothesis, based on various
> public Intel slides, is that the actual intended use case was internal
> to the ME, and that KL was ported to end-user CPUs more or less
> verbatim.

No, this is a separate feature. It has nothing to do with the firmware,
except that in some sleep states the firmware merely helps to back up the
key.

> I certainly understand how KL is valuable in a context where
> a verified boot process installs some KL keys that are not subsequently
> accessible outside the KL ISA, but Linux does not really work like this.

Do you mind elaborating on the concern? I am trying to understand whether
there is an issue with PATCH3 [1], specifically.

> I'm wondering what people will use it for.

Mentioned above.

> On a related note, does Intel plan to extend KL with ways to securely
> load keys? (E.g. the ability to, in effect, LOADIWKEY from inside an
> enclave? Key wrapping/unwrapping operations?) In other words,
> should we look at KL the way we look at MKTME, i.e. the foundation of
> something neat but not necessarily very useful as is, or should we
> expect that KL is in its more or less final form?

All I have is pretty much in the spec. So, I think the latter is the case.

I don’t see anything about LOADIWKEY inside an enclave in the spec. (A
relevant section is A.6.1 Key Locker Usage with TEE.)

> What is the expected interaction between a KL-using VM guest and the
> host VMM? Will there be performance impacts (to context switching, for
> example) if a guest enables KL, even if the guest does not subsequently
> do anything with it? Should Linux actually enable KL if it detects that
> it's a VM guest? Should Linux use a specific keying method as a guest?

First of all, there is an RFC series for KVM [2].

Each CPU has a single internal key state, so it needs to be reloaded between
guest and host if both are enabled. The proposed approach enables it
exclusively: KL is exposed to guests only when it is disabled in the host.
Then, I guess, a guest may enable it.

Thanks,
Chang

[1] https://lore.kernel.org/lkml/[email protected]/
[2] https://lore.kernel.org/kvm/[email protected]/

2021-05-18 23:35:13

by Dan Williams

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On Mon, May 17, 2021 at 11:21 AM Bae, Chang Seok
<[email protected]> wrote:
>
> On May 15, 2021, at 11:01, Andy Lutomirski <[email protected]> wrote:
> > On 5/14/21 1:14 PM, Chang S. Bae wrote:
> >> Key Locker [1][2] is a new security feature available in new Intel CPUs to
> >> protect data encryption keys for the Advanced Encryption Standard
> >> algorithm. The protection limits the amount of time an AES key is exposed
> >> in memory by sealing a key and referencing it with new AES instructions.
> >>
> >> The new AES instruction set is a successor to Intel's AES-NI (AES New
> >> Instructions). Users may switch to the Key Locker version from crypto
> >> libraries. This series includes a new AES implementation for the Crypto
> >> API, which was validated through the crypto unit tests. The performance in
> >> the test cases was measured and found comparable to the AES-NI version.
> >>
> >> Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel
> >> needs to load it and ensure it stays unchanged as long as CPUs are operational.
> >
> > I have high-level questions:
> >
> > What is the expected use case?
>
> The wrapping key here is only used for new AES instructions.
>
> I’m aware of potential use cases for encrypting file systems or disks.
>
> > My personal hypothesis, based on various
> > public Intel slides, is that the actual intended use case was internal
> > to the ME, and that KL was ported to end-user CPUs more or less
> > verbatim.
>
> No, this is a separate feature. It has nothing to do with the firmware,
> except that in some sleep states the firmware merely helps to back up the
> key.
>
> > I certainly understand how KL is valuable in a context where
> > a verified boot process installs some KL keys that are not subsequently
> > accessible outside the KL ISA, but Linux does not really work like this.
>
> Do you mind elaborating on the concern? I am trying to understand whether
> there is an issue with PATCH3 [1], specifically.

If I understand Andy's concern it is the observation that the weakest
link in this facility is the initial key load. Yes, KL reduces
exposure after that event, but the key loading process is still
vulnerable. This question is similar to the tradeoff between the Linux
"encrypted-keys" and "trusted-keys" interfaces. The trusted-keys
interface still has an attack window where the key is unwrapped in
kernel space to decrypt the sub-keys, but that exposure need not cross
the user-kernel boundary and can be time-limited to a given PCR state.
The encrypted-keys interface maintains the private-key material
outside the kernel where it has increased exposure. KL is effectively
"encrypted-keys" and Andy is questioning whether this makes KL similar
to the MKTME vs SGX / TDX situation.

>
> > I'm wondering what people will use it for.
>
> Mentioned above.

I don't think this answers Andy's question. There is a distinction
between what it can be used for and what people will deploy with it in
practice given the "encrypted-keys"-like exposure. Clarify the end
user benefit that motivates the kernel to carry this support.

2021-05-19 03:37:17

by Sean Christopherson

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On Mon, May 17, 2021, Bae, Chang Seok wrote:
> On May 15, 2021, at 11:01, Andy Lutomirski <[email protected]> wrote:
> > What is the expected interaction between a KL-using VM guest and the
> > host VMM?

Messy. :-)

> > Will there be performance impacts (to context switching, for
> > example) if a guest enables KL, even if the guest does not subsequently
> > do anything with it?

Short answer, yes. But the proposed solution is to disallow KL in KVM guests if
KL is in use by the host. The problem is that, by design, the host can't restore
its key via LOADIWKEY because the whole point is to throw away the real key. To
restore its value, the host would need to use the platform backup/restore
mechanism, which is comically slow (tens of thousands of cycles).

If KL virtualization is mutually exclusive with use in the host, then IIRC the
context switching penalty is only paid by vCPUs that have executed LOADIWKEY, as
other tasks can safely run with a stale/bogus key.

> > Should Linux actually enable KL if it detects that it's a VM guest?

Probably not by default. It shouldn't even be considered unless the VMM is
trusted, as a malicious VMM can completely subvert KL. Even if the host is
trusted, it's not clear that the tradeoffs are a net win.

Practically speaking, VMMs have to either (a) save the real key in host memory
or (b) provide a single VM exclusive access to the underlying hardware.

For (a), that rules out using an ephemeral, random key, as using a truly random
key prevents the VMM from saving/restoring the real key. That means the guest
has to generate its own key, and the host has to also store the key in memory.
There are also potential performance and live migration implications. The only
benefit to using KL in the guest is that the real key is not stored in _guest_
accessible memory. So it probably reduces the attack surface, but on the other
hand the VMM may store the guest's master key in a known location, which might
make cross-VM attacks easier in some ways.

(b) is a fairly unlikely scenario, and certainly can't be assumed to be the
default scenario for a guest.

> > Should Linux use a specific keying method as a guest?

Could you rephrase this question? I didn't follow.

> First of all, there is an RFC series for KVM [2].

That series also fails to address the use case question.

[*] https://lore.kernel.org/kvm/YGs07I%[email protected]/

2021-05-19 06:01:40

by Chang S. Bae

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On May 17, 2021, at 11:45, Dan Williams <[email protected]> wrote:
> On Mon, May 17, 2021 at 11:21 AM Bae, Chang Seok
> <[email protected]> wrote:
>>
>> On May 15, 2021, at 11:01, Andy Lutomirski <[email protected]> wrote:
>>>
>>>
>>> I certainly understand how KL is valuable in a context where
>>> a verified boot process installs some KL keys that are not subsequently
>>> accessible outside the KL ISA, but Linux does not really work like this.
>>
>> Do you mind elaborating on the concern? I am trying to understand whether
>> there is an issue with PATCH3 [1], specifically.
>
> If I understand Andy's concern it is the observation that the weakest
> link in this facility is the initial key load. Yes, KL reduces
> exposure after that event, but the key loading process is still
> vulnerable. This question is similar to the concern between the Linux
> "encrypted-keys" and "trusted-keys" interface. The trusted-keys
> interface still has an attack window where the key is unwrapped in
> kernel space to decrypt the sub-keys, but that exposure need not cross
> the user-kernel boundary and can be time-limited to a given PCR state.
> The encrypted-keys interface maintains the private-key material
> outside the kernel where it has increased exposure. KL is effectively
> "encrypted-keys" and Andy is questioning whether this makes KL similar
> to the MKTME vs SGX / TDX situation.

I don’t fully grasp the MKTME vs SGX/TDX background, but LOADIWKEY provides a
hardware randomization option for the initial load. With it, the internal key
is unknown to software. Nonetheless, if one does not trust this randomization
and decides not to use it, then having the key in memory at some point during
boot is perhaps unavoidable.
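
For illustration, a rough ring-0 sketch of that option. The constants and
the helper name here are made up for the example; the series wraps LOADIWKEY
in its own helper in <asm/special_insns.h>.

#include <stdbool.h>

#define LOADIWKEY_NOBACKUP	(1U << 0)	/* don't copy to platform backup */
#define LOADIWKEY_HWRAND	(1U << 1)	/* combine with hardware random data */

/*
 * Illustrative only: must run at ring 0 with CR4.KL set, and a real
 * helper would also declare the implicit XMM0-2 inputs to the compiler.
 */
static inline bool load_wrapping_key(unsigned int ctl)
{
	bool zf;

	/*
	 * LOADIWKEY (F3 0F 38 DC /r) takes its control word from EAX.
	 * With LOADIWKEY_HWRAND set, the CPU mixes in hardware random
	 * data and ZF reports whether enough entropy was available, so
	 * the caller should retry on failure.
	 */
	asm volatile(".byte 0xf3, 0x0f, 0x38, 0xdc, 0xd1"	/* LOADIWKEY */
		     : "=@ccz" (zf)
		     : "a" (ctl));
	return !zf;
}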

I think Dan just gave an example here, but FWIW, these “encrypted-keys” and
“trusted-keys” are for the kernel keyring service. I wish to clarify that the
keyring service itself is not the intended usage here. Instead, this series
focuses on the kernel Crypto API, as this technology protects AES keys during
data transformation.

>>> I'm wondering what people will use it for.
>>
>> Mentioned above.
>
> I don't think this answers Andy's question. There is a distinction
> between what it can be used for and what people will deploy with it in
> practice given the "encrypted-keys"-like exposure. Clarify the end
> user benefit that motivates the kernel to carry this support.

The end-user of this series will benefit from key protection at data
transformation time, with performance comparable to AES-NI.
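
For example, an in-kernel user would reach whichever AES provider wins the
priority selection (AES-KL or AES-NI) through the usual Crypto API pattern,
with nothing KL-specific in the caller. A generic sketch, not code from this
series:

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/*
 * Encrypt one request with the highest-priority "xts(aes)" provider;
 * on a KL-enabled system that would be the AES-KL driver.
 */
static int xts_encrypt_once(const u8 *key, unsigned int keylen,
			    struct scatterlist *src, struct scatterlist *dst,
			    unsigned int len, u8 *iv)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req = NULL;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err)
		goto out;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out;
	}

	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, src, dst, len, iv);

	/* Wait synchronously in case the driver completes asynchronously. */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
	skcipher_request_free(req);
	crypto_free_skcipher(tfm);
	return err;
}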

Thanks,
Chang

2021-05-19 18:28:17

by Sean Christopherson

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On Tue, May 18, 2021, Andy Lutomirski wrote:
> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
> > First of all, there is an RFC series for KVM [2].
> >
> > Each CPU has a single internal key state, so it needs to be reloaded between
> > guest and host if both are enabled. The proposed approach enables it
> > exclusively: KL is exposed to guests only when it is disabled in the host.
> > Then, I guess, a guest may enable it.
>
> I read that series. This is not a good solution.
>
> I can think of at least a few reasonable ways that a host and a guest
> can cooperate to, potentially, make KL useful.
>
> a) Host knows that the guest will never migrate, and guest delegates
> IWKEY management to the host. The host generates a random key and does
> not permit the guest to use LOADIWKEY. The guest shares the random key
> with the host. Of course, this means that a host key handle that leaks
> to a guest can be used within the guest.

If the guest and host share a random key, then they also share the key handle.
And that handle+key would also need to be shared across all guests. I doubt this
option is acceptable on the security front.

Using multiple random keys is a non-starter because they can't be restored via
LOADIWKEY.

Using multiple software-defined keys will have moderate overhead because of the
possibility of using KL from soft IRQ context, i.e. KVM would have to do
LOADIWKEY on every VM-Enter _and_ VM-Exit. It sounds like LOADIWKEY has latency
similar to WRMSR, so it's not a deal-breaker, but the added latency on top of the
restrictions on how the host can use KL certainly lessens the appeal.

> b) Host may migrate the guest. Guest delegates IWKEY management to the
> host, and the host generates and remembers a key for the guest. On
> migration, the host forwards the key to the new host. The host can
> still internally use any type of key, but context switches may be quite slow.

Migrating is sketchy because the IWKEY has to be exposed to host userspace.
But, I think the migration aspect is a secondary discussion.

> c) Guest wants to manage its own non-random key. Host lets it and
> context switches it.

This is essentially a variant of (b). In both cases, the host has full control
over the guest's key.

> d) Guest does not need KL and leaves CR4.KL clear. Host does whatever
> it wants with no overhead.
>
> All of these have tradeoffs.
>
> My current thought is that, if Linux is going to support Key Locker,
> then this all needs to be explicitly controlled. On initial boot, Linux
> should not initialize Key Locker. Upon explicit administrator request
> (via sysfs?), Linux will initialize Key Locker in the mode requested by
> the administrator.

Deferring KL usage to post-boot can work, but KVM shouldn't be allowed to expose
KL to a guest until KL has been explicitly configured in the host. If KVM can
spawn KL guests before the host is configured, the sysfs knob would have to deal
with the case where the desired configuration is incompatible with exposing KL
to a guest.

> Modes could include:
>
> native_random_key: Use a random key per the ISA.
>
> native_kernel_key_remember: Use a random key but load it as a non-random
> key. Remember the key in kernel memory and use it for S3 resume, etc.

What would be the motivation for this mode? It largely defeats the value
proposition of KL, no?

> native_kernel_key_backup: Use a random key, put it in the backup
> storage, and forget it. Use the backup for resume, etc.
>
> native_kernel_key_norestore: Use a random key. The key is lost on any
> power transition that forgets the key. Backup is not used.
>
> paravirt_any: Ask the hypervisor to handle keying. Any mechanism is
> acceptable.
>
> paravirt_random: Ask the hypervisor for a random key. Only succeeds if
> we get an actual random key.

AFAIK, there's no way for the guest to verify that it got a truly random key.
Hell, the guest can't even easily verify that KL is even supported. The host
can lie about CPUID and CR4.KL, and intercept all KL instructions via #UD by
running the guest with CR4.KL=0.

I also don't see any reason to define a paravirt interface for a truly random
key. Using a random key all but requires a single guest to have exclusive access
to KL, and in that case the host can simply expose KL to only that guest.

> Does this make sense?

I really want to see concrete guest use cases before we start adding paravirt
interfaces.

2021-05-19 23:27:27

by Andy Lutomirski

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On 5/18/21 10:52 AM, Sean Christopherson wrote:
> On Tue, May 18, 2021, Andy Lutomirski wrote:
>> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
>>> First of all, there is an RFC series for KVM [2].
>>>
>>> Each CPU has a single internal key state, so it needs to be reloaded between
>>> guest and host if both are enabled. The proposed approach enables it
>>> exclusively: KL is exposed to guests only when it is disabled in the host.
>>> Then, I guess, a guest may enable it.
>>
>> I read that series. This is not a good solution.
>>
>> I can think of at least a few reasonable ways that a host and a guest
>> can cooperate to, potentially, make KL useful.
>>
>> a) Host knows that the guest will never migrate, and guest delegates
>> IWKEY management to the host. The host generates a random key and does
>> not permit the guest to use LOADIWKEY. The guest shares the random key
>> with the host. Of course, this means that a host key handle that leaks
>> to a guest can be used within the guest.
>
> If the guest and host share a random key, then they also share the key handle.
> And that handle+key would also need to be shared across all guests. I doubt this
> option is acceptable on the security front.
>

Indeed. Oddly, SGX has the exact same problem for any scenario in which
SGX is used for HSM-like functionality, and people still use SGX.

However, I suspect that there will be use cases in which exactly one VM
is permitted to use KL. Qubes might want that (any Qubes people around?)

> Using multiple random keys is a non-starter because they can't be restored via
> LOADIWKEY.
>
> Using multiple software-defined keys will have moderate overhead because of the
> possibility of using KL from soft IRQ context, i.e. KVM would have to do
> LOADIWKEY on every VM-Enter _and_ VM-Exit. It sounds like LOADIWKEY has latency
> similar to WRMSR, so it's not a deal-breaker, but the added latency on top of the
> restrictions on how the host can use KL certainly lessen the appeal.

Indeed. This stinks.

>
>> b) Host may migrate the guest. Guest delegates IWKEY management to the
>> host, and the host generates and remembers a key for the guest. On
>> migration, the host forwards the key to the new host. The host can
>> still internally use any type of key, but context switches may be quite slow.
>
> Migrating is sketchy because the IWKEY has to be exposed to host userspace.
> But, I think the migration aspect is a secondary discussion.
>
>> c) Guest wants to manage its own non-random key. Host lets it and
>> context switches it.
>
> This is essentially a variant of (b). In both cases, the host has full control
> over the guest's key.
>
>> d) Guest does not need KL and leaves CR4.KL clear. Host does whatever
>> it wants with no overhead.
>>
>> All of these have tradeoffs.
>>
>> My current thought is that, if Linux is going to support Key Locker,
>> then this all needs to be explicitly controlled. On initial boot, Linux
>> should not initialize Key Locker. Upon explicit administrator request
>> (via sysfs?), Linux will initialize Key Locker in the mode requested by
>> the administrator.
>
> Deferring KL usage to post-boot can work, but KVM shouldn't be allowed to expose
> KL to a guest until KL has been explicitly configured in the host. If KVM can
> spawn KL guests before the host is configured, the sysfs knob would have to deal
> with the case where the desired configuration is incompatible with exposing KL
> to a guest.

There could be a host configuration "guest_only", perhaps.

>
>> Modes could include:
>>
>> native_random_key: Use a random key per the ISA.
>>
>> native_kernel_key_remember: Use a random key but load it as a non-random
>> key. Remember the key in kernel memory and use it for S3 resume, etc.
>
> What would be the motivation for this mode? It largely defeats the value
> proposition of KL, no?

It lets userspace use KL with some degree of security.

>
>> native_kernel_key_backup: Use a random key, put it in the backup
>> storage, and forget it. Use the backup for resume, etc.
>>
>> native_kernel_key_norestore: Use a random key. The key is lost on any
>> power transition that forgets the key. Backup is not used.
>>
>> paravirt_any: Ask the hypervisor to handle keying. Any mechanism is
>> acceptable.
>>
>> paravirt_random: Ask the hypervisor for a random key. Only succeeds if
>> we get an actual random key.
>
> AFAIK, there's no way for the guest to verify that it got a truly random key.
> Hell, the guest can't even easily verify that KL is even supported. The host
> can lie about CPUID and CR4.KL, and intercept all KL instructions via #UD by
> running the guest with CR4.KL=0.

The guest can use TDX. Oh wait, TDX doesn't support KL.

That being said, a host attack on the guest of this sort would be quite
slow.

>
> I also don't see any reason to define a paravirt interface for a truly random
> key. Using a random key all but requires a single guest to have exclusive access
> to KL, and in that case the host can simply expose KL to only that guest.
>
>> Does this make sense?
>
> I really want to see concrete guest use cases before we start adding paravirt
> interfaces.
>

I want to see concrete guest use cases before we start adding *any*
guest support. And this cuts both ways -- I think that, until the guest
use cases are at least somewhat worked out, Linux should certainly not
initialize KL by default on boot if the CPUID hypervisor bit is set.

2021-05-19 23:35:29

by Sean Christopherson

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On Wed, May 19, 2021, Andy Lutomirski wrote:
> On 5/18/21 10:52 AM, Sean Christopherson wrote:
> > On Tue, May 18, 2021, Andy Lutomirski wrote:
> >> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
> >>> First of all, there is an RFC series for KVM [2].
> >>>
> >>> Each CPU has a single internal key state, so it needs to be reloaded between
> >>> guest and host if both are enabled. The proposed approach enables it
> >>> exclusively: KL is exposed to guests only when it is disabled in the host.
> >>> Then, I guess, a guest may enable it.
> >>
> >> I read that series. This is not a good solution.
> >>
> >> I can think of at least a few reasonable ways that a host and a guest
> >> can cooperate to, potentially, make KL useful.
> >>
> >> a) Host knows that the guest will never migrate, and guest delegates
> >> IWKEY management to the host. The host generates a random key and does
> >> not permit the guest to use LOADIWKEY. The guest shares the random key
> >> with the host. Of course, this means that a host key handle that leaks
> >> to a guest can be used within the guest.
> >
> > If the guest and host share a random key, then they also share the key handle.
> > And that handle+key would also need to be shared across all guests. I doubt this
> > option is acceptable on the security front.
> >
>
> Indeed. Oddly, SGX has the exact same problem for any scenario in which
> SGX is used for HSM-like functionality, and people still use SGX.

The entire PRM/EPC shares a single key, but SGX doesn't rely on encryption to
isolate enclaves from other software, including other enclaves. E.g. Intel could
ship a CPU with the EPC backed entirely by on-die cache and avoid hardware
encryption entirely.

> However, I suspect that there will be use cases in which exactly one VM
> is permitted to use KL. Qubes might want that (any Qubes people around?)

2021-05-20 00:01:35

by Sean Christopherson

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On Wed, May 19, 2021, Sean Christopherson wrote:
> On Wed, May 19, 2021, Andy Lutomirski wrote:
> > On 5/18/21 10:52 AM, Sean Christopherson wrote:
> > > On Tue, May 18, 2021, Andy Lutomirski wrote:
> > >> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
> > >>> First of all, there is an RFC series for KVM [2].
> > >>>
> > >>> Each CPU has a single internal key state, so it needs to be reloaded between
> > >>> guest and host if both are enabled. The proposed approach enables it
> > >>> exclusively: KL is exposed to guests only when it is disabled in the host.
> > >>> Then, I guess, a guest may enable it.
> > >>
> > >> I read that series. This is not a good solution.
> > >>
> > >> I can think of at least a few reasonable ways that a host and a guest
> > >> can cooperate to, potentially, make KL useful.
> > >>
> > >> a) Host knows that the guest will never migrate, and guest delegates
> > >> IWKEY management to the host. The host generates a random key and does
> > >> not permit the guest to use LOADIWKEY. The guest shares the random key
> > >> with the host. Of course, this means that a host key handle that leaks
> > >> to a guest can be used within the guest.
> > >
> > > If the guest and host share a random key, then they also share the key handle.
> > > And that handle+key would also need to be shared across all guests. I doubt this
> > > option is acceptable on the security front.
> > >
> >
> > Indeed. Oddly, SGX has the exact same problem for any scenario in which
> > SGX is used for HSM-like functionality, and people still use SGX.
>
> The entire PRM/EPC shares a single key, but SGX doesn't rely on encryption to
> isolate enclaves from other software, including other enclaves. E.g. Intel could
> ship a CPU with the EPC backed entirely by on-die cache and avoid hardware
> encryption entirely.

Ha! I belatedly see your point: in the end, virtualized KL would also rely on a
trusted entity to isolate its sensitive data via paging-like mechanisms.

The difference in my mind is that encryption is a means to an end for SGX,
whereas hiding the key is the entire point of KL. E.g. the guest is already
relying on the VMM to isolate its code and data, adding KL doesn't change that.
Sharing an IWKEY across multiple guests would add intra-VM protection, at the
cost of making cross-VM attacks easier to some degree.

2021-12-06 21:48:22

by Chang S. Bae

Subject: Re: [RFC PATCH v2 00/11] x86: Support Intel Key Locker

On May 18, 2021, at 10:10, Andy Lutomirski <[email protected]> wrote:
> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
>> On May 15, 2021, at 11:01, Andy Lutomirski <[email protected]> wrote:
>>>
>>> I have high-level questions:
>>>
>>> What is the expected use case?
>>
>> The wrapping key here is only used for new AES instructions.
>>
>> I’m aware of potential use cases for encrypting file systems or disks.
>
> I would like to understand what people are actually going to do with
> this. Give me a user story or two, please. If it turns out to be
> useless, I would rather not merge it.

Hi Andy,

V3 was posted here with both cover letter and code changes to address this:
https://lore.kernel.org/lkml/[email protected]/

I would appreciate it if you could comment on the use case, at least.

Thanks,
Chang