2023-11-28 13:01:21

by Nikunj A. Dadhania

Subject: [PATCH v6 00/16] Add Secure TSC support for SNP guests

Secure TSC allows guests to securely use RDTSC/RDTSCP instructions, as the
parameters being used cannot be changed by the hypervisor once the guest is
launched. More details are in the AMD64 APM Vol 2, Section "Secure TSC".

During the boot-up of the secondary CPUs, SecureTSC-enabled guests need to
query TSC info from the AMD Security Processor. This communication channel
is encrypted between the AMD Security Processor and the guest; the
hypervisor is just the conduit that delivers the guest messages to the AMD
Security Processor. Each message is protected with an AEAD (AES-256 GCM).
See the "SEV Secure Nested Paging Firmware ABI Specification" document
(currently at https://www.amd.com/system/files/TechDocs/56860.pdf),
section "TSC Info".

Use a minimal AES-GCM library, which is available early in boot, to
encrypt/decrypt the SNP guest messages used to communicate with the AMD
Security Processor.
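
For reference, below is a minimal illustrative sketch (not part of the
patches) of how one guest message gets sealed with that library, mirroring
what enc_payload() does after the conversion in patch 01. The
seal_guest_msg() helper name is made up for the example; the structures and
the AAD_LEN/AUTHTAG_LEN defines are the ones introduced by this series:

/* Sketch only: seal one SNP guest message with the AES-GCM library */
static int seal_guest_msg(struct aesgcm_ctx *ctx, struct snp_guest_msg *msg,
			  u64 seqno, const void *plaintext, size_t len)
{
	struct snp_guest_msg_hdr *hdr = &msg->hdr;
	u8 iv[GCM_AES_IV_SIZE] = {};	/* 96-bit IV, from <crypto/gcm.h> */

	if (len + ctx->authsize > sizeof(msg->payload))
		return -EBADMSG;

	hdr->algo      = SNP_AEAD_AES_256_GCM;
	hdr->msg_seqno = seqno;
	hdr->msg_sz    = len;

	/* The IV is derived from the 64-bit message sequence number */
	memcpy(iv, &seqno, sizeof(seqno));

	/* Message header bytes 0x30 - 0x5f (starting at hdr->algo) are the AAD */
	aesgcm_encrypt(ctx, msg->payload, plaintext, len, &hdr->algo,
		       AAD_LEN, iv, hdr->authtag);
	return 0;
}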

The sev-guest driver implements the communication between the guest and
the AMD Security Processor. As the TSC_INFO request needs to be issued
during early boot, before the secondary CPUs are started, move most of the
sev-guest driver code to arch/x86/kernel/sev.c and provide well-defined
APIs to the sev-guest driver to avoid code duplication.

Patches:
01-08: Preparation and movement of sev-guest driver code
09-16: SecureTSC enablement patches.

Testing SecureTSC
-----------------

SecureTSC hypervisor patches are based on top of the SEV-SNP Guest MEMFD series:
https://github.com/nikunjad/linux/tree/snp-host-latest-securetsc_v5

QEMU changes:
https://github.com/nikunjad/qemu/tree/snp_securetsc_v5

QEMU command line for SEV-SNP-UPM with SecureTSC:

qemu-system-x86_64 -cpu EPYC-Milan-v2,+secure-tsc,+invtsc -smp 4 \
-object memory-backend-memfd-private,id=ram1,size=1G,share=true \
-object sev-snp-guest,id=sev0,cbitpos=51,reduced-phys-bits=1,secure-tsc=on \
-machine q35,confidential-guest-support=sev0,memory-backend=ram1,kvm-type=snp \
...

Changelog:
----------
v6:
* Add synthetic SecureTSC x86 feature bit
* Drop {__enc,dec}_payload() as they are pretty small and each has only one caller.
* Use data_npages as a variable instead of a pointer
* Beautify struct snp_guest_req
* Make vmpck_id an unsigned int in snp_assign_vmpck()
* Move most of the functions to the end of sev.c
* Update commit/comments/error messages
* Mark free_shared_pages and alloc_shared_pages as inline
* Free snp_dev->certs_data when guest driver is removed
* Add lockdep assert in snp_inc_msg_seqno()
* Drop redundant enc_init NULL check
* Move SNP_TSC_INFO_REQ_SZ define out of structure
* Rename guest_tsc_{scale,offset} to snp_tsc_{scale,offset}
* Add new Linux termination error code GHCB_TERM_SECURE_TSC
* Initialize and use cmd_mutex in snp_get_tsc_info()
* Set TSC as reliable in sme_early_init()
* Do not print firmware bug for Secure TSC enabled guests

v5:
* Rebased on v6.6 kernel
* Dropped link tag in first patch
* Dropped get_ctx_authsize() as it was redundant

https://lore.kernel.org/lkml/[email protected]/

v4:
* Drop handle_guest_request() and handle_guest_request_ext()
* Drop NULL check for key
* Corrected commit subject
* Added Reviewed-by from Tom

https://lore.kernel.org/lkml/[email protected]/

v3:
* Updated commit messages
* Made snp_setup_psp_messaging() generic so that it can be used by both the
kernel and the driver
* Moved most of the context information to sev.c, sev-guest driver
does not need to know the secrets page layout anymore
* Add CC_ATTR_GUEST_SECURE_TSC early in the series so that it can be
used in later patches.
* Removed data_gpa and data_npages from struct snp_req_data, as certs_data
and its size are passed to handle_guest_request_ext()
* Make vmpck_id an unsigned int
* Dropped unnecessary usage of memzero_explicit()
* Cache secrets_pa instead of remapping the cc_blob always
* Rebase on top of v6.4 kernel
https://lore.kernel.org/lkml/[email protected]/

v2:
* Rebased on top of v6.3-rc3 that has Boris's sev-guest cleanup series
https://lore.kernel.org/r/[email protected]/

v1: https://lore.kernel.org/r/[email protected]/

Nikunj A Dadhania (16):
virt: sev-guest: Use AES GCM crypto library
virt: sev-guest: Move mutex to SNP guest device structure
virt: sev-guest: Replace dev_dbg with pr_debug
virt: sev-guest: Add SNP guest request structure
virt: sev-guest: Add vmpck_id to snp_guest_dev struct
x86/sev: Cache the secrets page address
x86/sev: Move and reorganize sev guest request api
x86/mm: Add generic guest initialization hook
x86/cpufeatures: Add synthetic Secure TSC bit
x86/sev: Add Secure TSC support for SNP guests
x86/sev: Change TSC MSR behavior for Secure TSC enabled guests
x86/sev: Prevent RDTSC/RDTSCP interception for Secure TSC enabled
guests
x86/kvmclock: Skip kvmclock when Secure TSC is available
x86/sev: Mark Secure TSC as reliable
x86/cpu/amd: Do not print FW_BUG for Secure TSC
x86/sev: Enable Secure TSC for SNP guests

arch/x86/Kconfig | 1 +
arch/x86/boot/compressed/sev.c | 3 +-
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/sev-common.h | 1 +
arch/x86/include/asm/sev-guest.h | 191 +++++++
arch/x86/include/asm/sev.h | 20 +-
arch/x86/include/asm/svm.h | 6 +-
arch/x86/include/asm/x86_init.h | 2 +
arch/x86/kernel/cpu/amd.c | 3 +-
arch/x86/kernel/kvmclock.c | 2 +-
arch/x86/kernel/sev-shared.c | 10 +
arch/x86/kernel/sev.c | 622 ++++++++++++++++++++--
arch/x86/kernel/x86_init.c | 2 +
arch/x86/mm/mem_encrypt.c | 12 +-
arch/x86/mm/mem_encrypt_amd.c | 11 +
drivers/virt/coco/sev-guest/Kconfig | 3 -
drivers/virt/coco/sev-guest/sev-guest.c | 661 +++---------------------
drivers/virt/coco/sev-guest/sev-guest.h | 63 ---
18 files changed, 888 insertions(+), 726 deletions(-)
create mode 100644 arch/x86/include/asm/sev-guest.h
delete mode 100644 drivers/virt/coco/sev-guest/sev-guest.h


base-commit: 98b1cc82c4affc16f5598d4fa14b1858671b2263
--
2.34.1


2023-11-28 13:01:23

by Nikunj A. Dadhania

Subject: [PATCH v6 01/16] virt: sev-guest: Use AES GCM crypto library

The sev-guest driver encryption code uses the Crypto API for SNP guest
messaging to interact with the AMD Security Processor. For enabling
SecureTSC, SEV-SNP guests need to send a TSC_INFO request guest message
before the smpboot phase starts. Details from the TSC_INFO response will be
used to program the VMSA before the secondary CPUs are brought up. However,
the Crypto API is not available this early in the boot phase.

In preparation for moving the encryption code out of the sev-guest driver
to support SecureTSC, and to make reviewing the diff easier, start using
the AES GCM library implementation instead of the Crypto API.

Drop the __enc_payload() and dec_payload() helpers, as both are pretty
small and can be folded into their respective callers.

CC: Ard Biesheuvel <[email protected]>
Signed-off-by: Nikunj A Dadhania <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
---
drivers/virt/coco/sev-guest/Kconfig | 4 +-
drivers/virt/coco/sev-guest/sev-guest.c | 175 ++++++------------------
drivers/virt/coco/sev-guest/sev-guest.h | 3 +
3 files changed, 43 insertions(+), 139 deletions(-)

diff --git a/drivers/virt/coco/sev-guest/Kconfig b/drivers/virt/coco/sev-guest/Kconfig
index 1cffc72c41cb..0b772bd921d8 100644
--- a/drivers/virt/coco/sev-guest/Kconfig
+++ b/drivers/virt/coco/sev-guest/Kconfig
@@ -2,9 +2,7 @@ config SEV_GUEST
tristate "AMD SEV Guest driver"
default m
depends on AMD_MEM_ENCRYPT
- select CRYPTO
- select CRYPTO_AEAD2
- select CRYPTO_GCM
+ select CRYPTO_LIB_AESGCM
select TSM_REPORTS
help
SEV-SNP firmware provides the guest a mechanism to communicate with
diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
index bc564adcf499..aedc842781b6 100644
--- a/drivers/virt/coco/sev-guest/sev-guest.c
+++ b/drivers/virt/coco/sev-guest/sev-guest.c
@@ -17,8 +17,7 @@
#include <linux/set_memory.h>
#include <linux/fs.h>
#include <linux/tsm.h>
-#include <crypto/aead.h>
-#include <linux/scatterlist.h>
+#include <crypto/gcm.h>
#include <linux/psp-sev.h>
#include <linux/sockptr.h>
#include <linux/cleanup.h>
@@ -32,24 +31,16 @@
#include "sev-guest.h"

#define DEVICE_NAME "sev-guest"
-#define AAD_LEN 48
-#define MSG_HDR_VER 1

#define SNP_REQ_MAX_RETRY_DURATION (60*HZ)
#define SNP_REQ_RETRY_DELAY (2*HZ)

-struct snp_guest_crypto {
- struct crypto_aead *tfm;
- u8 *iv, *authtag;
- int iv_len, a_len;
-};
-
struct snp_guest_dev {
struct device *dev;
struct miscdevice misc;

void *certs_data;
- struct snp_guest_crypto *crypto;
+ struct aesgcm_ctx *ctx;
/* request and response are in unencrypted memory */
struct snp_guest_msg *request, *response;

@@ -161,132 +152,31 @@ static inline struct snp_guest_dev *to_snp_dev(struct file *file)
return container_of(dev, struct snp_guest_dev, misc);
}

-static struct snp_guest_crypto *init_crypto(struct snp_guest_dev *snp_dev, u8 *key, size_t keylen)
+static struct aesgcm_ctx *snp_init_crypto(u8 *key, size_t keylen)
{
- struct snp_guest_crypto *crypto;
+ struct aesgcm_ctx *ctx;

- crypto = kzalloc(sizeof(*crypto), GFP_KERNEL_ACCOUNT);
- if (!crypto)
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
+ if (!ctx)
return NULL;

- crypto->tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
- if (IS_ERR(crypto->tfm))
- goto e_free;
-
- if (crypto_aead_setkey(crypto->tfm, key, keylen))
- goto e_free_crypto;
-
- crypto->iv_len = crypto_aead_ivsize(crypto->tfm);
- crypto->iv = kmalloc(crypto->iv_len, GFP_KERNEL_ACCOUNT);
- if (!crypto->iv)
- goto e_free_crypto;
-
- if (crypto_aead_authsize(crypto->tfm) > MAX_AUTHTAG_LEN) {
- if (crypto_aead_setauthsize(crypto->tfm, MAX_AUTHTAG_LEN)) {
- dev_err(snp_dev->dev, "failed to set authsize to %d\n", MAX_AUTHTAG_LEN);
- goto e_free_iv;
- }
+ if (aesgcm_expandkey(ctx, key, keylen, AUTHTAG_LEN)) {
+ pr_err("Crypto context initialization failed\n");
+ kfree(ctx);
+ return NULL;
}

- crypto->a_len = crypto_aead_authsize(crypto->tfm);
- crypto->authtag = kmalloc(crypto->a_len, GFP_KERNEL_ACCOUNT);
- if (!crypto->authtag)
- goto e_free_iv;
-
- return crypto;
-
-e_free_iv:
- kfree(crypto->iv);
-e_free_crypto:
- crypto_free_aead(crypto->tfm);
-e_free:
- kfree(crypto);
-
- return NULL;
-}
-
-static void deinit_crypto(struct snp_guest_crypto *crypto)
-{
- crypto_free_aead(crypto->tfm);
- kfree(crypto->iv);
- kfree(crypto->authtag);
- kfree(crypto);
-}
-
-static int enc_dec_message(struct snp_guest_crypto *crypto, struct snp_guest_msg *msg,
- u8 *src_buf, u8 *dst_buf, size_t len, bool enc)
-{
- struct snp_guest_msg_hdr *hdr = &msg->hdr;
- struct scatterlist src[3], dst[3];
- DECLARE_CRYPTO_WAIT(wait);
- struct aead_request *req;
- int ret;
-
- req = aead_request_alloc(crypto->tfm, GFP_KERNEL);
- if (!req)
- return -ENOMEM;
-
- /*
- * AEAD memory operations:
- * +------ AAD -------+------- DATA -----+---- AUTHTAG----+
- * | msg header | plaintext | hdr->authtag |
- * | bytes 30h - 5Fh | or | |
- * | | cipher | |
- * +------------------+------------------+----------------+
- */
- sg_init_table(src, 3);
- sg_set_buf(&src[0], &hdr->algo, AAD_LEN);
- sg_set_buf(&src[1], src_buf, hdr->msg_sz);
- sg_set_buf(&src[2], hdr->authtag, crypto->a_len);
-
- sg_init_table(dst, 3);
- sg_set_buf(&dst[0], &hdr->algo, AAD_LEN);
- sg_set_buf(&dst[1], dst_buf, hdr->msg_sz);
- sg_set_buf(&dst[2], hdr->authtag, crypto->a_len);
-
- aead_request_set_ad(req, AAD_LEN);
- aead_request_set_tfm(req, crypto->tfm);
- aead_request_set_callback(req, 0, crypto_req_done, &wait);
-
- aead_request_set_crypt(req, src, dst, len, crypto->iv);
- ret = crypto_wait_req(enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req), &wait);
-
- aead_request_free(req);
- return ret;
-}
-
-static int __enc_payload(struct snp_guest_dev *snp_dev, struct snp_guest_msg *msg,
- void *plaintext, size_t len)
-{
- struct snp_guest_crypto *crypto = snp_dev->crypto;
- struct snp_guest_msg_hdr *hdr = &msg->hdr;
-
- memset(crypto->iv, 0, crypto->iv_len);
- memcpy(crypto->iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
-
- return enc_dec_message(crypto, msg, plaintext, msg->payload, len, true);
-}
-
-static int dec_payload(struct snp_guest_dev *snp_dev, struct snp_guest_msg *msg,
- void *plaintext, size_t len)
-{
- struct snp_guest_crypto *crypto = snp_dev->crypto;
- struct snp_guest_msg_hdr *hdr = &msg->hdr;
-
- /* Build IV with response buffer sequence number */
- memset(crypto->iv, 0, crypto->iv_len);
- memcpy(crypto->iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
-
- return enc_dec_message(crypto, msg, msg->payload, plaintext, len, false);
+ return ctx;
}

static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, void *payload, u32 sz)
{
- struct snp_guest_crypto *crypto = snp_dev->crypto;
struct snp_guest_msg *resp = &snp_dev->secret_response;
struct snp_guest_msg *req = &snp_dev->secret_request;
struct snp_guest_msg_hdr *req_hdr = &req->hdr;
struct snp_guest_msg_hdr *resp_hdr = &resp->hdr;
+ struct aesgcm_ctx *ctx = snp_dev->ctx;
+ u8 iv[GCM_AES_IV_SIZE] = {};

dev_dbg(snp_dev->dev, "response [seqno %lld type %d version %d sz %d]\n",
resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version, resp_hdr->msg_sz);
@@ -307,11 +197,16 @@ static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, void *payload,
* If the message size is greater than our buffer length then return
* an error.
*/
- if (unlikely((resp_hdr->msg_sz + crypto->a_len) > sz))
+ if (unlikely((resp_hdr->msg_sz + ctx->authsize) > sz))
return -EBADMSG;

/* Decrypt the payload */
- return dec_payload(snp_dev, resp, payload, resp_hdr->msg_sz + crypto->a_len);
+ memcpy(iv, &resp_hdr->msg_seqno, sizeof(resp_hdr->msg_seqno));
+ if (!aesgcm_decrypt(ctx, payload, resp->payload, resp_hdr->msg_sz,
+ &resp_hdr->algo, AAD_LEN, iv, resp_hdr->authtag))
+ return -EBADMSG;
+
+ return 0;
}

static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, int version, u8 type,
@@ -319,6 +214,8 @@ static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, int version, u8
{
struct snp_guest_msg *req = &snp_dev->secret_request;
struct snp_guest_msg_hdr *hdr = &req->hdr;
+ struct aesgcm_ctx *ctx = snp_dev->ctx;
+ u8 iv[GCM_AES_IV_SIZE] = {};

memset(req, 0, sizeof(*req));

@@ -338,7 +235,14 @@ static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, int version, u8
dev_dbg(snp_dev->dev, "request [seqno %lld type %d version %d sz %d]\n",
hdr->msg_seqno, hdr->msg_type, hdr->msg_version, hdr->msg_sz);

- return __enc_payload(snp_dev, req, payload, sz);
+ if (WARN_ON((sz + ctx->authsize) > sizeof(req->payload)))
+ return -EBADMSG;
+
+ memcpy(iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
+ aesgcm_encrypt(ctx, req->payload, payload, sz, &hdr->algo, AAD_LEN,
+ iv, hdr->authtag);
+
+ return 0;
}

static int __handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
@@ -486,7 +390,6 @@ struct snp_req_resp {

static int get_report(struct snp_guest_dev *snp_dev, struct snp_guest_request_ioctl *arg)
{
- struct snp_guest_crypto *crypto = snp_dev->crypto;
struct snp_report_req *req = &snp_dev->req.report;
struct snp_report_resp *resp;
int rc, resp_len;
@@ -504,7 +407,7 @@ static int get_report(struct snp_guest_dev *snp_dev, struct snp_guest_request_io
* response payload. Make sure that it has enough space to cover the
* authtag.
*/
- resp_len = sizeof(resp->data) + crypto->a_len;
+ resp_len = sizeof(resp->data) + snp_dev->ctx->authsize;
resp = kzalloc(resp_len, GFP_KERNEL_ACCOUNT);
if (!resp)
return -ENOMEM;
@@ -526,7 +429,6 @@ static int get_report(struct snp_guest_dev *snp_dev, struct snp_guest_request_io
static int get_derived_key(struct snp_guest_dev *snp_dev, struct snp_guest_request_ioctl *arg)
{
struct snp_derived_key_req *req = &snp_dev->req.derived_key;
- struct snp_guest_crypto *crypto = snp_dev->crypto;
struct snp_derived_key_resp resp = {0};
int rc, resp_len;
/* Response data is 64 bytes and max authsize for GCM is 16 bytes. */
@@ -542,7 +444,7 @@ static int get_derived_key(struct snp_guest_dev *snp_dev, struct snp_guest_reque
* response payload. Make sure that it has enough space to cover the
* authtag.
*/
- resp_len = sizeof(resp.data) + crypto->a_len;
+ resp_len = sizeof(resp.data) + snp_dev->ctx->authsize;
if (sizeof(buf) < resp_len)
return -ENOMEM;

@@ -569,7 +471,6 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques

{
struct snp_ext_report_req *req = &snp_dev->req.ext_report;
- struct snp_guest_crypto *crypto = snp_dev->crypto;
struct snp_report_resp *resp;
int ret, npages = 0, resp_len;
sockptr_t certs_address;
@@ -612,7 +513,7 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques
* response payload. Make sure that it has enough space to cover the
* authtag.
*/
- resp_len = sizeof(resp->data) + crypto->a_len;
+ resp_len = sizeof(resp->data) + snp_dev->ctx->authsize;
resp = kzalloc(resp_len, GFP_KERNEL_ACCOUNT);
if (!resp)
return -ENOMEM;
@@ -954,8 +855,8 @@ static int __init sev_guest_probe(struct platform_device *pdev)
goto e_free_response;

ret = -EIO;
- snp_dev->crypto = init_crypto(snp_dev, snp_dev->vmpck, VMPCK_KEY_LEN);
- if (!snp_dev->crypto)
+ snp_dev->ctx = snp_init_crypto(snp_dev->vmpck, VMPCK_KEY_LEN);
+ if (!snp_dev->ctx)
goto e_free_cert_data;

misc = &snp_dev->misc;
@@ -978,11 +879,13 @@ static int __init sev_guest_probe(struct platform_device *pdev)

ret = misc_register(misc);
if (ret)
- goto e_free_cert_data;
+ goto e_free_ctx;

dev_info(dev, "Initialized SEV guest driver (using vmpck_id %d)\n", vmpck_id);
return 0;

+e_free_ctx:
+ kfree(snp_dev->ctx);
e_free_cert_data:
free_shared_pages(snp_dev->certs_data, SEV_FW_BLOB_MAX_SIZE);
e_free_response:
@@ -1001,7 +904,7 @@ static int __exit sev_guest_remove(struct platform_device *pdev)
free_shared_pages(snp_dev->certs_data, SEV_FW_BLOB_MAX_SIZE);
free_shared_pages(snp_dev->response, sizeof(struct snp_guest_msg));
free_shared_pages(snp_dev->request, sizeof(struct snp_guest_msg));
- deinit_crypto(snp_dev->crypto);
+ kfree(snp_dev->ctx);
misc_deregister(&snp_dev->misc);

return 0;
diff --git a/drivers/virt/coco/sev-guest/sev-guest.h b/drivers/virt/coco/sev-guest/sev-guest.h
index 21bda26fdb95..ceb798a404d6 100644
--- a/drivers/virt/coco/sev-guest/sev-guest.h
+++ b/drivers/virt/coco/sev-guest/sev-guest.h
@@ -13,6 +13,9 @@
#include <linux/types.h>

#define MAX_AUTHTAG_LEN 32
+#define AUTHTAG_LEN 16
+#define AAD_LEN 48
+#define MSG_HDR_VER 1

/* See SNP spec SNP_GUEST_REQUEST section for the structure */
enum msg_type {
--
2.34.1

2023-11-28 13:01:27

by Nikunj A. Dadhania

Subject: [PATCH v6 03/16] virt: sev-guest: Replace dev_dbg with pr_debug

In preparation for moving the code to arch/x86/kernel/sev.c, replace
dev_dbg() with pr_debug().

Signed-off-by: Nikunj A Dadhania <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
---
drivers/virt/coco/sev-guest/sev-guest.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
index 8382fd657e67..917c19e9e5ed 100644
--- a/drivers/virt/coco/sev-guest/sev-guest.c
+++ b/drivers/virt/coco/sev-guest/sev-guest.c
@@ -178,8 +178,9 @@ static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, void *payload,
struct aesgcm_ctx *ctx = snp_dev->ctx;
u8 iv[GCM_AES_IV_SIZE] = {};

- dev_dbg(snp_dev->dev, "response [seqno %lld type %d version %d sz %d]\n",
- resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version, resp_hdr->msg_sz);
+ pr_debug("response [seqno %lld type %d version %d sz %d]\n",
+ resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version,
+ resp_hdr->msg_sz);

/* Copy response from shared memory to encrypted memory. */
memcpy(resp, snp_dev->response, sizeof(*resp));
@@ -232,8 +233,8 @@ static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, int version, u8
if (!hdr->msg_seqno)
return -ENOSR;

- dev_dbg(snp_dev->dev, "request [seqno %lld type %d version %d sz %d]\n",
- hdr->msg_seqno, hdr->msg_type, hdr->msg_version, hdr->msg_sz);
+ pr_debug("request [seqno %lld type %d version %d sz %d]\n",
+ hdr->msg_seqno, hdr->msg_type, hdr->msg_version, hdr->msg_sz);

if (WARN_ON((sz + ctx->authsize) > sizeof(req->payload)))
return -EBADMSG;
--
2.34.1

2023-11-28 13:01:51

by Nikunj A. Dadhania

Subject: [PATCH v6 05/16] virt: sev-guest: Add vmpck_id to snp_guest_dev struct

Drop the vmpck and os_area_msg_seqno pointers so that the secrets page
layout does not need to be exposed to the sev-guest driver after the
rework. Instead, add helper APIs to access vmpck and os_area_msg_seqno
when needed.

Also, rename is_vmpck_empty() to snp_is_vmpck_empty() in preparation for
moving it to sev.c.

Signed-off-by: Nikunj A Dadhania <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
---
drivers/virt/coco/sev-guest/sev-guest.c | 95 ++++++++++++-------------
1 file changed, 47 insertions(+), 48 deletions(-)

diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
index 1579140d43ec..0f2134deca51 100644
--- a/drivers/virt/coco/sev-guest/sev-guest.c
+++ b/drivers/virt/coco/sev-guest/sev-guest.c
@@ -59,22 +59,29 @@ struct snp_guest_dev {
struct snp_derived_key_req derived_key;
struct snp_ext_report_req ext_report;
} req;
- u32 *os_area_msg_seqno;
- u8 *vmpck;
+ unsigned int vmpck_id;
};

static u32 vmpck_id;
module_param(vmpck_id, uint, 0444);
MODULE_PARM_DESC(vmpck_id, "The VMPCK ID to use when communicating with the PSP.");

-static bool is_vmpck_empty(struct snp_guest_dev *snp_dev)
+static inline u8 *snp_get_vmpck(struct snp_guest_dev *snp_dev)
{
- char zero_key[VMPCK_KEY_LEN] = {0};
+ return snp_dev->layout->vmpck0 + snp_dev->vmpck_id * VMPCK_KEY_LEN;
+}

- if (snp_dev->vmpck)
- return !memcmp(snp_dev->vmpck, zero_key, VMPCK_KEY_LEN);
+static inline u32 *snp_get_os_area_msg_seqno(struct snp_guest_dev *snp_dev)
+{
+ return &snp_dev->layout->os_area.msg_seqno_0 + snp_dev->vmpck_id;
+}

- return true;
+static bool snp_is_vmpck_empty(struct snp_guest_dev *snp_dev)
+{
+ char zero_key[VMPCK_KEY_LEN] = {0};
+ u8 *key = snp_get_vmpck(snp_dev);
+
+ return !memcmp(key, zero_key, VMPCK_KEY_LEN);
}

/*
@@ -96,20 +103,22 @@ static bool is_vmpck_empty(struct snp_guest_dev *snp_dev)
*/
static void snp_disable_vmpck(struct snp_guest_dev *snp_dev)
{
- dev_alert(snp_dev->dev, "Disabling vmpck_id %d to prevent IV reuse.\n",
- vmpck_id);
- memzero_explicit(snp_dev->vmpck, VMPCK_KEY_LEN);
- snp_dev->vmpck = NULL;
+ u8 *key = snp_get_vmpck(snp_dev);
+
+ dev_alert(snp_dev->dev, "Disabling vmpck_id %u to prevent IV reuse.\n",
+ snp_dev->vmpck_id);
+ memzero_explicit(key, VMPCK_KEY_LEN);
}

static inline u64 __snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
{
+ u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev);
u64 count;

lockdep_assert_held(&snp_dev->cmd_mutex);

/* Read the current message sequence counter from secrets pages */
- count = *snp_dev->os_area_msg_seqno;
+ count = *os_area_msg_seqno;

return count + 1;
}
@@ -137,11 +146,13 @@ static u64 snp_get_msg_seqno(struct snp_guest_dev *snp_dev)

static void snp_inc_msg_seqno(struct snp_guest_dev *snp_dev)
{
+ u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev);
+
/*
* The counter is also incremented by the PSP, so increment it by 2
* and save in secrets page.
*/
- *snp_dev->os_area_msg_seqno += 2;
+ *os_area_msg_seqno += 2;
}

static inline struct snp_guest_dev *to_snp_dev(struct file *file)
@@ -151,15 +162,22 @@ static inline struct snp_guest_dev *to_snp_dev(struct file *file)
return container_of(dev, struct snp_guest_dev, misc);
}

-static struct aesgcm_ctx *snp_init_crypto(u8 *key, size_t keylen)
+static struct aesgcm_ctx *snp_init_crypto(struct snp_guest_dev *snp_dev)
{
struct aesgcm_ctx *ctx;
+ u8 *key;
+
+ if (snp_is_vmpck_empty(snp_dev)) {
+ pr_err("VM communication key VMPCK%u is null\n", vmpck_id);
+ return NULL;
+ }

ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
if (!ctx)
return NULL;

- if (aesgcm_expandkey(ctx, key, keylen, AUTHTAG_LEN)) {
+ key = snp_get_vmpck(snp_dev);
+ if (aesgcm_expandkey(ctx, key, VMPCK_KEY_LEN, AUTHTAG_LEN)) {
pr_err("Crypto context initialization failed\n");
kfree(ctx);
return NULL;
@@ -589,7 +607,7 @@ static long snp_guest_ioctl(struct file *file, unsigned int ioctl, unsigned long
mutex_lock(&snp_dev->cmd_mutex);

/* Check if the VMPCK is not empty */
- if (is_vmpck_empty(snp_dev)) {
+ if (snp_is_vmpck_empty(snp_dev)) {
dev_err_ratelimited(snp_dev->dev, "VMPCK is disabled\n");
mutex_unlock(&snp_dev->cmd_mutex);
return -ENOTTY;
@@ -666,32 +684,14 @@ static const struct file_operations snp_guest_fops = {
.unlocked_ioctl = snp_guest_ioctl,
};

-static u8 *get_vmpck(int id, struct snp_secrets_page_layout *layout, u32 **seqno)
+bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id)
{
- u8 *key = NULL;
+ if (WARN_ON(vmpck_id > 3))
+ return false;

- switch (id) {
- case 0:
- *seqno = &layout->os_area.msg_seqno_0;
- key = layout->vmpck0;
- break;
- case 1:
- *seqno = &layout->os_area.msg_seqno_1;
- key = layout->vmpck1;
- break;
- case 2:
- *seqno = &layout->os_area.msg_seqno_2;
- key = layout->vmpck2;
- break;
- case 3:
- *seqno = &layout->os_area.msg_seqno_3;
- key = layout->vmpck3;
- break;
- default:
- break;
- }
+ dev->vmpck_id = vmpck_id;

- return key;
+ return true;
}

struct snp_msg_report_resp_hdr {
@@ -727,7 +727,7 @@ static int sev_report_new(struct tsm_report *report, void *data)
guard(mutex)(&snp_dev->cmd_mutex);

/* Check if the VMPCK is not empty */
- if (is_vmpck_empty(snp_dev)) {
+ if (snp_is_vmpck_empty(snp_dev)) {
dev_err_ratelimited(snp_dev->dev, "VMPCK is disabled\n");
return -ENOTTY;
}
@@ -847,22 +847,21 @@ static int __init sev_guest_probe(struct platform_device *pdev)
goto e_unmap;

ret = -EINVAL;
- snp_dev->vmpck = get_vmpck(vmpck_id, layout, &snp_dev->os_area_msg_seqno);
- if (!snp_dev->vmpck) {
- dev_err(dev, "invalid vmpck id %d\n", vmpck_id);
+ snp_dev->layout = layout;
+ if (!snp_assign_vmpck(snp_dev, vmpck_id)) {
+ dev_err(dev, "invalid vmpck id %u\n", vmpck_id);
goto e_unmap;
}

/* Verify that VMPCK is not zero. */
- if (is_vmpck_empty(snp_dev)) {
- dev_err(dev, "vmpck id %d is null\n", vmpck_id);
+ if (snp_is_vmpck_empty(snp_dev)) {
+ dev_err(dev, "vmpck id %u is null\n", vmpck_id);
goto e_unmap;
}

mutex_init(&snp_dev->cmd_mutex);
platform_set_drvdata(pdev, snp_dev);
snp_dev->dev = dev;
- snp_dev->layout = layout;

/* Allocate the shared page used for the request and response message. */
snp_dev->request = alloc_shared_pages(dev, sizeof(struct snp_guest_msg));
@@ -878,7 +877,7 @@ static int __init sev_guest_probe(struct platform_device *pdev)
goto e_free_response;

ret = -EIO;
- snp_dev->ctx = snp_init_crypto(snp_dev->vmpck, VMPCK_KEY_LEN);
+ snp_dev->ctx = snp_init_crypto(snp_dev);
if (!snp_dev->ctx)
goto e_free_cert_data;

@@ -903,7 +902,7 @@ static int __init sev_guest_probe(struct platform_device *pdev)
if (ret)
goto e_free_ctx;

- dev_info(dev, "Initialized SEV guest driver (using vmpck_id %d)\n", vmpck_id);
+ dev_info(dev, "Initialized SEV guest driver (using vmpck_id %u)\n", vmpck_id);
return 0;

e_free_ctx:
--
2.34.1

2023-11-28 13:02:09

by Nikunj A. Dadhania

Subject: [PATCH v6 04/16] virt: sev-guest: Add SNP guest request structure

Add a snp_guest_req structure to simplify the function arguments. The
structure will be used to call the SNP guest message request API instead
of passing a long list of parameters.

Update the snp_issue_guest_request() prototype to take the new guest
request structure and move the prototype to sev-guest.h.

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
.../x86/include/asm}/sev-guest.h | 18 +++
arch/x86/include/asm/sev.h | 8 --
arch/x86/kernel/sev.c | 15 ++-
drivers/virt/coco/sev-guest/sev-guest.c | 108 +++++++++++-------
4 files changed, 93 insertions(+), 56 deletions(-)
rename {drivers/virt/coco/sev-guest => arch/x86/include/asm}/sev-guest.h (78%)

diff --git a/drivers/virt/coco/sev-guest/sev-guest.h b/arch/x86/include/asm/sev-guest.h
similarity index 78%
rename from drivers/virt/coco/sev-guest/sev-guest.h
rename to arch/x86/include/asm/sev-guest.h
index ceb798a404d6..27cc15ad6131 100644
--- a/drivers/virt/coco/sev-guest/sev-guest.h
+++ b/arch/x86/include/asm/sev-guest.h
@@ -63,4 +63,22 @@ struct snp_guest_msg {
u8 payload[4000];
} __packed;

+struct snp_guest_req {
+ void *req_buf;
+ size_t req_sz;
+
+ void *resp_buf;
+ size_t resp_sz;
+
+ void *data;
+ size_t data_npages;
+
+ u64 exit_code;
+ unsigned int vmpck_id;
+ u8 msg_version;
+ u8 msg_type;
+};
+
+int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *input,
+ struct snp_guest_request_ioctl *rio);
#endif /* __VIRT_SEVGUEST_H__ */
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 5b4a1ce3d368..78465a8c7dc6 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -97,8 +97,6 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
struct snp_req_data {
unsigned long req_gpa;
unsigned long resp_gpa;
- unsigned long data_gpa;
- unsigned int data_npages;
};

struct sev_guest_platform_data {
@@ -209,7 +207,6 @@ void snp_set_memory_private(unsigned long vaddr, unsigned long npages);
void snp_set_wakeup_secondary_cpu(void);
bool snp_init(struct boot_params *bp);
void __init __noreturn snp_abort(void);
-int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio);
void snp_accept_memory(phys_addr_t start, phys_addr_t end);
u64 snp_get_unsupported_features(u64 status);
u64 sev_get_status(void);
@@ -233,11 +230,6 @@ static inline void snp_set_memory_private(unsigned long vaddr, unsigned long npa
static inline void snp_set_wakeup_secondary_cpu(void) { }
static inline bool snp_init(struct boot_params *bp) { return false; }
static inline void snp_abort(void) { }
-static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio)
-{
- return -ENOTTY;
-}
-
static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
static inline u64 sev_get_status(void) { return 0; }
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 70472eebe719..01a400681529 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -28,6 +28,7 @@
#include <asm/cpu_entry_area.h>
#include <asm/stacktrace.h>
#include <asm/sev.h>
+#include <asm/sev-guest.h>
#include <asm/insn-eval.h>
#include <asm/fpu/xcr.h>
#include <asm/processor.h>
@@ -2167,15 +2168,21 @@ static int __init init_sev_config(char *str)
}
__setup("sev=", init_sev_config);

-int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio)
+int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *input,
+ struct snp_guest_request_ioctl *rio)
{
struct ghcb_state state;
struct es_em_ctxt ctxt;
unsigned long flags;
struct ghcb *ghcb;
+ u64 exit_code;
int ret;

rio->exitinfo2 = SEV_RET_NO_FW_CALL;
+ if (!req)
+ return -EINVAL;
+
+ exit_code = req->exit_code;

/*
* __sev_get_ghcb() needs to run with IRQs disabled because it is using
@@ -2192,8 +2199,8 @@ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct sn
vc_ghcb_invalidate(ghcb);

if (exit_code == SVM_VMGEXIT_EXT_GUEST_REQUEST) {
- ghcb_set_rax(ghcb, input->data_gpa);
- ghcb_set_rbx(ghcb, input->data_npages);
+ ghcb_set_rax(ghcb, __pa(req->data));
+ ghcb_set_rbx(ghcb, req->data_npages);
}

ret = sev_es_ghcb_hv_call(ghcb, &ctxt, exit_code, input->req_gpa, input->resp_gpa);
@@ -2212,7 +2219,7 @@ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct sn
case SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN):
/* Number of expected pages are returned in RBX */
if (exit_code == SVM_VMGEXIT_EXT_GUEST_REQUEST) {
- input->data_npages = ghcb_get_rbx(ghcb);
+ req->data_npages = ghcb_get_rbx(ghcb);
ret = -ENOSPC;
break;
}
diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
index 917c19e9e5ed..1579140d43ec 100644
--- a/drivers/virt/coco/sev-guest/sev-guest.c
+++ b/drivers/virt/coco/sev-guest/sev-guest.c
@@ -27,8 +27,7 @@

#include <asm/svm.h>
#include <asm/sev.h>
-
-#include "sev-guest.h"
+#include <asm/sev-guest.h>

#define DEVICE_NAME "sev-guest"

@@ -169,7 +168,7 @@ static struct aesgcm_ctx *snp_init_crypto(u8 *key, size_t keylen)
return ctx;
}

-static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, void *payload, u32 sz)
+static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, struct snp_guest_req *guest_req)
{
struct snp_guest_msg *resp = &snp_dev->secret_response;
struct snp_guest_msg *req = &snp_dev->secret_request;
@@ -198,36 +197,35 @@ static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, void *payload,
* If the message size is greater than our buffer length then return
* an error.
*/
- if (unlikely((resp_hdr->msg_sz + ctx->authsize) > sz))
+ if (unlikely((resp_hdr->msg_sz + ctx->authsize) > guest_req->resp_sz))
return -EBADMSG;

/* Decrypt the payload */
memcpy(iv, &resp_hdr->msg_seqno, sizeof(resp_hdr->msg_seqno));
- if (!aesgcm_decrypt(ctx, payload, resp->payload, resp_hdr->msg_sz,
+ if (!aesgcm_decrypt(ctx, guest_req->resp_buf, resp->payload, resp_hdr->msg_sz,
&resp_hdr->algo, AAD_LEN, iv, resp_hdr->authtag))
return -EBADMSG;

return 0;
}

-static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, int version, u8 type,
- void *payload, size_t sz)
+static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, struct snp_guest_req *req)
{
- struct snp_guest_msg *req = &snp_dev->secret_request;
- struct snp_guest_msg_hdr *hdr = &req->hdr;
+ struct snp_guest_msg *msg = &snp_dev->secret_request;
+ struct snp_guest_msg_hdr *hdr = &msg->hdr;
struct aesgcm_ctx *ctx = snp_dev->ctx;
u8 iv[GCM_AES_IV_SIZE] = {};

- memset(req, 0, sizeof(*req));
+ memset(msg, 0, sizeof(*msg));

hdr->algo = SNP_AEAD_AES_256_GCM;
hdr->hdr_version = MSG_HDR_VER;
hdr->hdr_sz = sizeof(*hdr);
- hdr->msg_type = type;
- hdr->msg_version = version;
+ hdr->msg_type = req->msg_type;
+ hdr->msg_version = req->msg_version;
hdr->msg_seqno = seqno;
- hdr->msg_vmpck = vmpck_id;
- hdr->msg_sz = sz;
+ hdr->msg_vmpck = req->vmpck_id;
+ hdr->msg_sz = req->req_sz;

/* Verify the sequence number is non-zero */
if (!hdr->msg_seqno)
@@ -236,17 +234,17 @@ static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, int version, u8
pr_debug("request [seqno %lld type %d version %d sz %d]\n",
hdr->msg_seqno, hdr->msg_type, hdr->msg_version, hdr->msg_sz);

- if (WARN_ON((sz + ctx->authsize) > sizeof(req->payload)))
+ if (WARN_ON((req->req_sz + ctx->authsize) > sizeof(msg->payload)))
return -EBADMSG;

memcpy(iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
- aesgcm_encrypt(ctx, req->payload, payload, sz, &hdr->algo, AAD_LEN,
- iv, hdr->authtag);
+ aesgcm_encrypt(ctx, msg->payload, req->req_buf, req->req_sz, &hdr->algo,
+ AAD_LEN, iv, hdr->authtag);

return 0;
}

-static int __handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
+static int __handle_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
struct snp_guest_request_ioctl *rio)
{
unsigned long req_start = jiffies;
@@ -261,7 +259,7 @@ static int __handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
* sequence number must be incremented or the VMPCK must be deleted to
* prevent reuse of the IV.
*/
- rc = snp_issue_guest_request(exit_code, &snp_dev->input, rio);
+ rc = snp_issue_guest_request(req, &snp_dev->input, rio);
switch (rc) {
case -ENOSPC:
/*
@@ -271,8 +269,8 @@ static int __handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
* order to increment the sequence number and thus avoid
* IV reuse.
*/
- override_npages = snp_dev->input.data_npages;
- exit_code = SVM_VMGEXIT_GUEST_REQUEST;
+ override_npages = req->data_npages;
+ req->exit_code = SVM_VMGEXIT_GUEST_REQUEST;

/*
* Override the error to inform callers the given extended
@@ -327,15 +325,13 @@ static int __handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
}

if (override_npages)
- snp_dev->input.data_npages = override_npages;
+ req->data_npages = override_npages;

return rc;
}

-static int handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
- struct snp_guest_request_ioctl *rio, u8 type,
- void *req_buf, size_t req_sz, void *resp_buf,
- u32 resp_sz)
+static int snp_send_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
+ struct snp_guest_request_ioctl *rio)
{
u64 seqno;
int rc;
@@ -349,7 +345,7 @@ static int handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
memset(snp_dev->response, 0, sizeof(struct snp_guest_msg));

/* Encrypt the userspace provided payload in snp_dev->secret_request. */
- rc = enc_payload(snp_dev, seqno, rio->msg_version, type, req_buf, req_sz);
+ rc = enc_payload(snp_dev, seqno, req);
if (rc)
return rc;

@@ -360,7 +356,7 @@ static int handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
memcpy(snp_dev->request, &snp_dev->secret_request,
sizeof(snp_dev->secret_request));

- rc = __handle_guest_request(snp_dev, exit_code, rio);
+ rc = __handle_guest_request(snp_dev, req, rio);
if (rc) {
if (rc == -EIO &&
rio->exitinfo2 == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
@@ -369,12 +365,11 @@ static int handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code,
dev_alert(snp_dev->dev,
"Detected error from ASP request. rc: %d, exitinfo2: 0x%llx\n",
rc, rio->exitinfo2);
-
snp_disable_vmpck(snp_dev);
return rc;
}

- rc = verify_and_dec_payload(snp_dev, resp_buf, resp_sz);
+ rc = verify_and_dec_payload(snp_dev, req);
if (rc) {
dev_alert(snp_dev->dev, "Detected unexpected decode failure from ASP. rc: %d\n", rc);
snp_disable_vmpck(snp_dev);
@@ -392,6 +387,7 @@ struct snp_req_resp {
static int get_report(struct snp_guest_dev *snp_dev, struct snp_guest_request_ioctl *arg)
{
struct snp_report_req *req = &snp_dev->req.report;
+ struct snp_guest_req guest_req = {0};
struct snp_report_resp *resp;
int rc, resp_len;

@@ -413,9 +409,16 @@ static int get_report(struct snp_guest_dev *snp_dev, struct snp_guest_request_io
if (!resp)
return -ENOMEM;

- rc = handle_guest_request(snp_dev, SVM_VMGEXIT_GUEST_REQUEST, arg,
- SNP_MSG_REPORT_REQ, req, sizeof(*req), resp->data,
- resp_len);
+ guest_req.msg_version = arg->msg_version;
+ guest_req.msg_type = SNP_MSG_REPORT_REQ;
+ guest_req.vmpck_id = vmpck_id;
+ guest_req.req_buf = req;
+ guest_req.req_sz = sizeof(*req);
+ guest_req.resp_buf = resp->data;
+ guest_req.resp_sz = resp_len;
+ guest_req.exit_code = SVM_VMGEXIT_GUEST_REQUEST;
+
+ rc = snp_send_guest_request(snp_dev, &guest_req, arg);
if (rc)
goto e_free;

@@ -431,6 +434,7 @@ static int get_derived_key(struct snp_guest_dev *snp_dev, struct snp_guest_reque
{
struct snp_derived_key_req *req = &snp_dev->req.derived_key;
struct snp_derived_key_resp resp = {0};
+ struct snp_guest_req guest_req = {0};
int rc, resp_len;
/* Response data is 64 bytes and max authsize for GCM is 16 bytes. */
u8 buf[64 + 16];
@@ -452,8 +456,16 @@ static int get_derived_key(struct snp_guest_dev *snp_dev, struct snp_guest_reque
if (copy_from_user(req, (void __user *)arg->req_data, sizeof(*req)))
return -EFAULT;

- rc = handle_guest_request(snp_dev, SVM_VMGEXIT_GUEST_REQUEST, arg,
- SNP_MSG_KEY_REQ, req, sizeof(*req), buf, resp_len);
+ guest_req.msg_version = arg->msg_version;
+ guest_req.msg_type = SNP_MSG_KEY_REQ;
+ guest_req.vmpck_id = vmpck_id;
+ guest_req.req_buf = req;
+ guest_req.req_sz = sizeof(*req);
+ guest_req.resp_buf = buf;
+ guest_req.resp_sz = resp_len;
+ guest_req.exit_code = SVM_VMGEXIT_GUEST_REQUEST;
+
+ rc = snp_send_guest_request(snp_dev, &guest_req, arg);
if (rc)
return rc;

@@ -472,9 +484,10 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques

{
struct snp_ext_report_req *req = &snp_dev->req.ext_report;
+ struct snp_guest_req guest_req = {0};
struct snp_report_resp *resp;
- int ret, npages = 0, resp_len;
sockptr_t certs_address;
+ int ret, resp_len;

lockdep_assert_held(&snp_dev->cmd_mutex);

@@ -507,7 +520,7 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques
* zeros to indicate that certificate data was not provided.
*/
memset(snp_dev->certs_data, 0, req->certs_len);
- npages = req->certs_len >> PAGE_SHIFT;
+ guest_req.data_npages = req->certs_len >> PAGE_SHIFT;
cmd:
/*
* The intermediate response buffer is used while decrypting the
@@ -519,14 +532,21 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques
if (!resp)
return -ENOMEM;

- snp_dev->input.data_npages = npages;
- ret = handle_guest_request(snp_dev, SVM_VMGEXIT_EXT_GUEST_REQUEST, arg,
- SNP_MSG_REPORT_REQ, &req->data,
- sizeof(req->data), resp->data, resp_len);
+ guest_req.msg_version = arg->msg_version;
+ guest_req.msg_type = SNP_MSG_REPORT_REQ;
+ guest_req.vmpck_id = vmpck_id;
+ guest_req.req_buf = &req->data;
+ guest_req.req_sz = sizeof(req->data);
+ guest_req.resp_buf = resp->data;
+ guest_req.resp_sz = resp_len;
+ guest_req.exit_code = SVM_VMGEXIT_EXT_GUEST_REQUEST;
+ guest_req.data = snp_dev->certs_data;
+
+ ret = snp_send_guest_request(snp_dev, &guest_req, arg);

/* If certs length is invalid then copy the returned length */
if (arg->vmm_error == SNP_GUEST_VMM_ERR_INVALID_LEN) {
- req->certs_len = snp_dev->input.data_npages << PAGE_SHIFT;
+ req->certs_len = guest_req.data_npages << PAGE_SHIFT;

if (copy_to_sockptr(io->req_data, req, sizeof(*req)))
ret = -EFAULT;
@@ -535,7 +555,8 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques
if (ret)
goto e_free;

- if (npages && copy_to_sockptr(certs_address, snp_dev->certs_data, req->certs_len)) {
+ if (guest_req.data_npages && req->certs_len &&
+ copy_to_sockptr(certs_address, snp_dev->certs_data, req->certs_len)) {
ret = -EFAULT;
goto e_free;
}
@@ -869,7 +890,6 @@ static int __init sev_guest_probe(struct platform_device *pdev)
/* initial the input address for guest request */
snp_dev->input.req_gpa = __pa(snp_dev->request);
snp_dev->input.resp_gpa = __pa(snp_dev->response);
- snp_dev->input.data_gpa = __pa(snp_dev->certs_data);

ret = tsm_register(&sev_tsm_ops, snp_dev, &tsm_report_extra_type);
if (ret)
--
2.34.1

2023-11-28 13:02:30

by Nikunj A. Dadhania

Subject: [PATCH v6 06/16] x86/sev: Cache the secrets page address

Save the secrets page address from the CC blob during snp_init(). Use
secrets_pa instead of calling get_secrets_page(), which remaps the CC blob
every time the secrets page is needed.

Signed-off-by: Nikunj A Dadhania <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
---
arch/x86/kernel/sev.c | 52 +++++++++++++------------------------------
1 file changed, 16 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 01a400681529..479ea61f40f3 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -72,6 +72,9 @@ static struct ghcb *boot_ghcb __section(".data");
/* Bitmap of SEV features supported by the hypervisor */
static u64 sev_hv_features __ro_after_init;

+/* Secrets page physical address from the CC blob */
+static u64 secrets_pa __ro_after_init;
+
/* #VC handler runtime per-CPU data */
struct sev_es_runtime_data {
struct ghcb ghcb_page;
@@ -598,45 +601,16 @@ void noinstr __sev_es_nmi_complete(void)
__sev_put_ghcb(&state);
}

-static u64 __init get_secrets_page(void)
-{
- u64 pa_data = boot_params.cc_blob_address;
- struct cc_blob_sev_info info;
- void *map;
-
- /*
- * The CC blob contains the address of the secrets page, check if the
- * blob is present.
- */
- if (!pa_data)
- return 0;
-
- map = early_memremap(pa_data, sizeof(info));
- if (!map) {
- pr_err("Unable to locate SNP secrets page: failed to map the Confidential Computing blob.\n");
- return 0;
- }
- memcpy(&info, map, sizeof(info));
- early_memunmap(map, sizeof(info));
-
- /* smoke-test the secrets page passed */
- if (!info.secrets_phys || info.secrets_len != PAGE_SIZE)
- return 0;
-
- return info.secrets_phys;
-}
-
static u64 __init get_snp_jump_table_addr(void)
{
struct snp_secrets_page_layout *layout;
void __iomem *mem;
- u64 pa, addr;
+ u64 addr;

- pa = get_secrets_page();
- if (!pa)
+ if (!secrets_pa)
return 0;

- mem = ioremap_encrypted(pa, PAGE_SIZE);
+ mem = ioremap_encrypted(secrets_pa, PAGE_SIZE);
if (!mem) {
pr_err("Unable to locate AP jump table address: failed to map the SNP secrets page.\n");
return 0;
@@ -2083,6 +2057,12 @@ static __init struct cc_blob_sev_info *find_cc_blob(struct boot_params *bp)
return cc_info;
}

+static void __init set_secrets_pa(const struct cc_blob_sev_info *cc_info)
+{
+ if (cc_info && cc_info->secrets_phys && cc_info->secrets_len == PAGE_SIZE)
+ secrets_pa = cc_info->secrets_phys;
+}
+
bool __init snp_init(struct boot_params *bp)
{
struct cc_blob_sev_info *cc_info;
@@ -2094,6 +2074,8 @@ bool __init snp_init(struct boot_params *bp)
if (!cc_info)
return false;

+ set_secrets_pa(cc_info);
+
setup_cpuid_table(cc_info);

/*
@@ -2246,16 +2228,14 @@ static struct platform_device sev_guest_device = {
static int __init snp_init_platform_device(void)
{
struct sev_guest_platform_data data;
- u64 gpa;

if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
return -ENODEV;

- gpa = get_secrets_page();
- if (!gpa)
+ if (!secrets_pa)
return -ENODEV;

- data.secrets_gpa = gpa;
+ data.secrets_gpa = secrets_pa;
if (platform_device_add_data(&sev_guest_device, &data, sizeof(data)))
return -ENODEV;

--
2.34.1

2023-11-28 13:02:38

by Nikunj A. Dadhania

Subject: [PATCH v6 09/16] x86/cpufeatures: Add synthetic Secure TSC bit

Add a synthetic CPUID flag which indicates that the SNP guest is running
with Secure TSC enabled (MSR_AMD64_SEV bit 11 - SecureTsc_Enabled). With
this flag, the capability can be detected easily in the guest without
reading the MSR on every access.
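
A minimal sketch (illustrative, not part of this patch's diff) of how the
synthetic flag can be forced once MSR_AMD64_SEV reports the bit, assuming
the existing MSR_AMD64_SNP_SECURE_TSC definition for bit 11; the actual
wiring is done by the enablement patches later in this series:

	/* Sketch: force the synthetic flag when SecureTsc_Enabled is set */
	if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
		setup_force_cpu_cap(X86_FEATURE_SNP_SECURE_TSC);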

Suggested-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 4af140cf5719..e9dafc8cd9dc 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -237,6 +237,7 @@
#define X86_FEATURE_PVUNLOCK ( 8*32+20) /* "" PV unlock function */
#define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* "" PV vcpu_is_preempted function */
#define X86_FEATURE_TDX_GUEST ( 8*32+22) /* Intel Trust Domain Extensions Guest */
+#define X86_FEATURE_SNP_SECURE_TSC ( 8*32+23) /* "" AMD SNP Secure TSC */

/* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
#define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
--
2.34.1

2023-11-28 13:02:48

by Nikunj A. Dadhania

Subject: [PATCH v6 10/16] x86/sev: Add Secure TSC support for SNP guests

Add support for Secure TSC in SNP-enabled guests. Secure TSC allows
guests to securely use RDTSC/RDTSCP instructions, as the parameters being
used cannot be changed by the hypervisor once the guest is launched.

During the boot-up of the secondary CPUs, SecureTSC-enabled guests need to
query TSC info from the AMD Security Processor. This communication channel
is encrypted between the AMD Security Processor and the guest; the
hypervisor is just the conduit that delivers the guest messages to the AMD
Security Processor. Each message is protected with an AEAD (AES-256 GCM).
Use the minimal AES GCM library to encrypt/decrypt the SNP guest messages
used to communicate with the PSP.

Use the guest enc_init hook to fetch the SNP TSC info from the AMD
Security Processor and initialize snp_tsc_scale and snp_tsc_offset. During
secondary CPU initialization, set the VMSA fields GUEST_TSC_SCALE (offset
2F0h) and GUEST_TSC_OFFSET (offset 2F8h) to snp_tsc_scale and
snp_tsc_offset respectively.

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/include/asm/sev-common.h | 1 +
arch/x86/include/asm/sev-guest.h | 20 +++++++
arch/x86/include/asm/sev.h | 2 +
arch/x86/include/asm/svm.h | 6 ++-
arch/x86/kernel/sev.c | 88 +++++++++++++++++++++++++++++++
arch/x86/mm/mem_encrypt_amd.c | 6 +++
6 files changed, 121 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index b463fcbd4b90..6adc8e27feeb 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -159,6 +159,7 @@ struct snp_psc_desc {
#define GHCB_TERM_NOT_VMPL0 3 /* SNP guest is not running at VMPL-0 */
#define GHCB_TERM_CPUID 4 /* CPUID-validation failure */
#define GHCB_TERM_CPUID_HV 5 /* CPUID failure during hypervisor fallback */
+#define GHCB_TERM_SECURE_TSC 6 /* Secure TSC initialization failed */

#define GHCB_RESP_CODE(v) ((v) & GHCB_MSR_INFO_MASK)

diff --git a/arch/x86/include/asm/sev-guest.h b/arch/x86/include/asm/sev-guest.h
index 16bf25c14e6f..b23051e6b39e 100644
--- a/arch/x86/include/asm/sev-guest.h
+++ b/arch/x86/include/asm/sev-guest.h
@@ -39,6 +39,8 @@ enum msg_type {
SNP_MSG_ABSORB_RSP,
SNP_MSG_VMRK_REQ,
SNP_MSG_VMRK_RSP,
+ SNP_MSG_TSC_INFO_REQ = 17,
+ SNP_MSG_TSC_INFO_RSP,

SNP_MSG_TYPE_MAX
};
@@ -83,6 +85,23 @@ struct sev_guest_platform_data {
struct snp_req_data input;
};

+#define SNP_TSC_INFO_REQ_SZ 128
+
+struct snp_tsc_info_req {
+ /* Must be zero filled */
+ u8 rsvd[SNP_TSC_INFO_REQ_SZ];
+} __packed;
+
+struct snp_tsc_info_resp {
+ /* Status of TSC_INFO message */
+ u32 status;
+ u32 rsvd1;
+ u64 tsc_scale;
+ u64 tsc_offset;
+ u32 tsc_factor;
+ u8 rsvd2[100];
+} __packed;
+
struct snp_guest_dev {
struct device *dev;
struct miscdevice misc;
@@ -105,6 +124,7 @@ struct snp_guest_dev {
struct snp_report_req report;
struct snp_derived_key_req derived_key;
struct snp_ext_report_req ext_report;
+ struct snp_tsc_info_req tsc_info;
} req;
unsigned int vmpck_id;
};
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 783150458864..038a5a15d937 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -200,6 +200,7 @@ void __init __noreturn snp_abort(void);
void snp_accept_memory(phys_addr_t start, phys_addr_t end);
u64 snp_get_unsupported_features(u64 status);
u64 sev_get_status(void);
+void __init snp_secure_tsc_prepare(void);
#else
static inline void sev_es_ist_enter(struct pt_regs *regs) { }
static inline void sev_es_ist_exit(void) { }
@@ -223,6 +224,7 @@ static inline void snp_abort(void) { }
static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
static inline u64 sev_get_status(void) { return 0; }
+static inline void __init snp_secure_tsc_prepare(void) { }
#endif

#endif
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 87a7b917d30e..3a8294bbd109 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -410,7 +410,9 @@ struct sev_es_save_area {
u8 reserved_0x298[80];
u32 pkru;
u32 tsc_aux;
- u8 reserved_0x2f0[24];
+ u64 tsc_scale;
+ u64 tsc_offset;
+ u8 reserved_0x300[8];
u64 rcx;
u64 rdx;
u64 rbx;
@@ -542,7 +544,7 @@ static inline void __unused_size_checks(void)
BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x1c0);
BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x248);
BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x298);
- BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x2f0);
+ BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x300);
BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x320);
BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x380);
BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0x3f0);
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a413add2fd2c..1cb6c66d1601 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -76,6 +76,10 @@ static u64 sev_hv_features __ro_after_init;
/* Secrets page physical address from the CC blob */
static u64 secrets_pa __ro_after_init;

+/* Secure TSC values read using TSC_INFO SNP Guest request */
+static u64 snp_tsc_scale __ro_after_init;
+static u64 snp_tsc_offset __ro_after_init;
+
/* #VC handler runtime per-CPU data */
struct sev_es_runtime_data {
struct ghcb ghcb_page;
@@ -942,6 +946,84 @@ static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa)
free_page((unsigned long)vmsa);
}

+static struct snp_guest_dev tsc_snp_dev __initdata;
+
+static int __init snp_get_tsc_info(void)
+{
+ struct snp_tsc_info_req *tsc_req = &tsc_snp_dev.req.tsc_info;
+ static u8 buf[SNP_TSC_INFO_REQ_SZ + AUTHTAG_LEN];
+ struct snp_guest_request_ioctl rio;
+ struct snp_tsc_info_resp tsc_resp;
+ struct snp_guest_req req;
+ int rc, resp_len;
+
+ /*
+ * The intermediate response buffer is used while decrypting the
+ * response payload. Make sure that it has enough space to cover the
+ * authtag.
+ */
+ resp_len = sizeof(tsc_resp) + AUTHTAG_LEN;
+ if (sizeof(buf) < resp_len)
+ return -EINVAL;
+
+ memset(tsc_req, 0, sizeof(*tsc_req));
+ memset(&req, 0, sizeof(req));
+ memset(&rio, 0, sizeof(rio));
+ memset(buf, 0, sizeof(buf));
+
+ mutex_init(&tsc_snp_dev.cmd_mutex);
+ if (!snp_assign_vmpck(&tsc_snp_dev, 0))
+ return -EINVAL;
+
+ /* Initialize the PSP channel to send snp messages */
+ rc = snp_setup_psp_messaging(&tsc_snp_dev);
+ if (rc)
+ return rc;
+
+ req.msg_version = MSG_HDR_VER;
+ req.msg_type = SNP_MSG_TSC_INFO_REQ;
+ req.vmpck_id = tsc_snp_dev.vmpck_id;
+ req.req_buf = tsc_req;
+ req.req_sz = sizeof(*tsc_req);
+ req.resp_buf = buf;
+ req.resp_sz = resp_len;
+ req.exit_code = SVM_VMGEXIT_GUEST_REQUEST;
+
+ mutex_lock(&tsc_snp_dev.cmd_mutex);
+ rc = snp_send_guest_request(&tsc_snp_dev, &req, &rio);
+ if (rc)
+ goto err_req;
+
+ memcpy(&tsc_resp, buf, sizeof(tsc_resp));
+ pr_debug("%s: Valid response status %x scale %llx offset %llx factor %x\n",
+ __func__, tsc_resp.status, tsc_resp.tsc_scale, tsc_resp.tsc_offset,
+ tsc_resp.tsc_factor);
+
+ snp_tsc_scale = tsc_resp.tsc_scale;
+ snp_tsc_offset = tsc_resp.tsc_offset;
+
+err_req:
+ mutex_unlock(&tsc_snp_dev.cmd_mutex);
+
+ /* The response buffer contains the sensitive data, explicitly clear it. */
+ memzero_explicit(buf, sizeof(buf));
+ memzero_explicit(&tsc_resp, sizeof(tsc_resp));
+ memzero_explicit(&req, sizeof(req));
+
+ return rc;
+}
+
+void __init snp_secure_tsc_prepare(void)
+{
+ if (!cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
+ return;
+
+ if (snp_get_tsc_info())
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SECURE_TSC);
+
+ pr_debug("SecureTSC enabled\n");
+}
+
static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
{
struct sev_es_save_area *cur_vmsa, *vmsa;
@@ -1042,6 +1124,12 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
vmsa->vmpl = 0;
vmsa->sev_features = sev_status >> 2;

+ /* Setting Secure TSC parameters */
+ if (cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC)) {
+ vmsa->tsc_scale = snp_tsc_scale;
+ vmsa->tsc_offset = snp_tsc_offset;
+ }
+
/* Switch the page over to a VMSA page now that it is initialized */
ret = snp_set_vmsa(vmsa, true);
if (ret) {
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index a68f2dda0948..f561753fc94d 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -213,6 +213,11 @@ void __init sme_map_bootdata(char *real_mode_data)
__sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true);
}

+void __init amd_enc_init(void)
+{
+ snp_secure_tsc_prepare();
+}
+
static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
{
unsigned long pfn = 0;
@@ -466,6 +471,7 @@ void __init sme_early_init(void)
x86_platform.guest.enc_status_change_finish = amd_enc_status_change_finish;
x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required;
x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required;
+ x86_platform.guest.enc_init = amd_enc_init;

/*
* AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the
--
2.34.1

2023-11-28 13:02:48

by Nikunj A. Dadhania

Subject: [PATCH v6 07/16] x86/sev: Move and reorganize sev guest request api

For enabling Secure TSC, SEV-SNP guests need to communicate with the AMD
Security Processor early during boot. Many of the required functions are
implemented in the sev-guest driver and are therefore not available at
early boot. Move the required functions to arch/x86/kernel/sev.c and
provide APIs to the sev-guest driver for sending guest messages and for
the VMPCK routines.

As there is no external caller of snp_issue_guest_request() anymore, make
it static and drop its prototype from sev-guest.h.

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/sev-guest.h | 91 ++++-
arch/x86/include/asm/sev.h | 10 -
arch/x86/kernel/sev.c | 451 +++++++++++++++++++++-
drivers/virt/coco/sev-guest/Kconfig | 1 -
drivers/virt/coco/sev-guest/sev-guest.c | 479 +-----------------------
6 files changed, 550 insertions(+), 483 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3762f41bb092..b8f374ec5651 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1534,6 +1534,7 @@ config AMD_MEM_ENCRYPT
select ARCH_HAS_CC_PLATFORM
select X86_MEM_ENCRYPT
select UNACCEPTED_MEMORY
+ select CRYPTO_LIB_AESGCM
help
Say yes to enable support for the encryption of system memory.
This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/sev-guest.h b/arch/x86/include/asm/sev-guest.h
index 27cc15ad6131..16bf25c14e6f 100644
--- a/arch/x86/include/asm/sev-guest.h
+++ b/arch/x86/include/asm/sev-guest.h
@@ -11,6 +11,11 @@
#define __VIRT_SEVGUEST_H__

#include <linux/types.h>
+#include <linux/miscdevice.h>
+#include <asm/sev.h>
+
+#define SNP_REQ_MAX_RETRY_DURATION (60*HZ)
+#define SNP_REQ_RETRY_DELAY (2*HZ)

#define MAX_AUTHTAG_LEN 32
#define AUTHTAG_LEN 16
@@ -58,11 +63,52 @@ struct snp_guest_msg_hdr {
u8 rsvd3[35];
} __packed;

+/* SNP Guest message request */
+struct snp_req_data {
+ unsigned long req_gpa;
+ unsigned long resp_gpa;
+};
+
struct snp_guest_msg {
struct snp_guest_msg_hdr hdr;
u8 payload[4000];
} __packed;

+struct sev_guest_platform_data {
+ /* request and response are in unencrypted memory */
+ struct snp_guest_msg *request;
+ struct snp_guest_msg *response;
+
+ struct snp_secrets_page_layout *layout;
+ struct snp_req_data input;
+};
+
+struct snp_guest_dev {
+ struct device *dev;
+ struct miscdevice misc;
+
+ /* Mutex to serialize the shared buffer access and command handling. */
+ struct mutex cmd_mutex;
+
+ void *certs_data;
+ struct aesgcm_ctx *ctx;
+
+ /*
+ * Avoid information leakage by double-buffering shared messages
+ * in fields that are in regular encrypted memory
+ */
+ struct snp_guest_msg secret_request;
+ struct snp_guest_msg secret_response;
+
+ struct sev_guest_platform_data *pdata;
+ union {
+ struct snp_report_req report;
+ struct snp_derived_key_req derived_key;
+ struct snp_ext_report_req ext_report;
+ } req;
+ unsigned int vmpck_id;
+};
+
struct snp_guest_req {
void *req_buf;
size_t req_sz;
@@ -79,6 +125,47 @@ struct snp_guest_req {
u8 msg_type;
};

-int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *input,
- struct snp_guest_request_ioctl *rio);
+int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev);
+int snp_send_guest_request(struct snp_guest_dev *dev, struct snp_guest_req *req,
+ struct snp_guest_request_ioctl *rio);
+bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id);
+bool snp_is_vmpck_empty(unsigned int vmpck_id);
+
+static inline void free_shared_pages(void *buf, size_t sz)
+{
+ unsigned int npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+ int ret;
+
+ if (!buf)
+ return;
+
+ ret = set_memory_encrypted((unsigned long)buf, npages);
+ if (ret) {
+ WARN_ONCE(ret, "failed to restore encryption mask (leak it)\n");
+ return;
+ }
+
+ __free_pages(virt_to_page(buf), get_order(sz));
+}
+
+static inline void *alloc_shared_pages(size_t sz)
+{
+ unsigned int npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+ struct page *page;
+ int ret;
+
+ page = alloc_pages(GFP_KERNEL_ACCOUNT, get_order(sz));
+ if (!page)
+ return NULL;
+
+ ret = set_memory_decrypted((unsigned long)page_address(page), npages);
+ if (ret) {
+ pr_err("%s: failed to mark page shared, ret=%d\n", __func__, ret);
+ __free_pages(page, get_order(sz));
+ return NULL;
+ }
+
+ return page_address(page);
+}
+
#endif /* __VIRT_SEVGUEST_H__ */
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 78465a8c7dc6..783150458864 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -93,16 +93,6 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs);

#define RMPADJUST_VMSA_PAGE_BIT BIT(16)

-/* SNP Guest message request */
-struct snp_req_data {
- unsigned long req_gpa;
- unsigned long resp_gpa;
-};
-
-struct sev_guest_platform_data {
- u64 secrets_gpa;
-};
-
/*
* The secrets page contains 96-bytes of reserved field that can be used by
* the guest OS. The guest OS uses the area to save the message sequence
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 479ea61f40f3..a413add2fd2c 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -24,6 +24,7 @@
#include <linux/io.h>
#include <linux/psp-sev.h>
#include <uapi/linux/sev-guest.h>
+#include <crypto/gcm.h>

#include <asm/cpu_entry_area.h>
#include <asm/stacktrace.h>
@@ -2150,8 +2151,8 @@ static int __init init_sev_config(char *str)
}
__setup("sev=", init_sev_config);

-int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *input,
- struct snp_guest_request_ioctl *rio)
+static int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *input,
+ struct snp_guest_request_ioctl *rio)
{
struct ghcb_state state;
struct es_em_ctxt ctxt;
@@ -2218,7 +2219,6 @@ int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *inpu

return ret;
}
-EXPORT_SYMBOL_GPL(snp_issue_guest_request);

static struct platform_device sev_guest_device = {
.name = "sev-guest",
@@ -2227,22 +2227,451 @@ static struct platform_device sev_guest_device = {

static int __init snp_init_platform_device(void)
{
- struct sev_guest_platform_data data;
-
if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
return -ENODEV;

- if (!secrets_pa)
+ if (platform_device_register(&sev_guest_device))
return -ENODEV;

- data.secrets_gpa = secrets_pa;
- if (platform_device_add_data(&sev_guest_device, &data, sizeof(data)))
+ pr_info("SNP guest platform device initialized.\n");
+ return 0;
+}
+device_initcall(snp_init_platform_device);
+
+static struct sev_guest_platform_data *platform_data;
+
+static inline u8 *snp_get_vmpck(unsigned int vmpck_id)
+{
+ if (!platform_data)
+ return NULL;
+
+ return platform_data->layout->vmpck0 + vmpck_id * VMPCK_KEY_LEN;
+}
+
+static inline u32 *snp_get_os_area_msg_seqno(unsigned int vmpck_id)
+{
+ if (!platform_data)
+ return NULL;
+
+ return &platform_data->layout->os_area.msg_seqno_0 + vmpck_id;
+}
+
+bool snp_is_vmpck_empty(unsigned int vmpck_id)
+{
+ char zero_key[VMPCK_KEY_LEN] = {0};
+ u8 *key = snp_get_vmpck(vmpck_id);
+
+ if (key)
+ return !memcmp(key, zero_key, VMPCK_KEY_LEN);
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(snp_is_vmpck_empty);
+
+/*
+ * If an error is received from the host or AMD Secure Processor (ASP) there
+ * are two options. Either retry the exact same encrypted request or discontinue
+ * using the VMPCK.
+ *
+ * This is because in the current encryption scheme GHCB v2 uses AES-GCM to
+ * encrypt the requests. The IV for this scheme is the sequence number. GCM
+ * cannot tolerate IV reuse.
+ *
+ * The ASP FW v1.51 only increments the sequence numbers on a successful
+ * guest<->ASP back and forth and only accepts messages at its exact sequence
+ * number.
+ *
+ * So if the sequence number were to be reused the encryption scheme is
+ * vulnerable. If the sequence number were incremented for a fresh IV the ASP
+ * will reject the request.
+ */
+static void snp_disable_vmpck(struct snp_guest_dev *snp_dev)
+{
+ u8 *key = snp_get_vmpck(snp_dev->vmpck_id);
+
+ pr_alert("Disabling vmpck_id %u to prevent IV reuse.\n", snp_dev->vmpck_id);
+ memzero_explicit(key, VMPCK_KEY_LEN);
+}
+
+static inline u64 __snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
+{
+ u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev->vmpck_id);
+ u64 count;
+
+ if (!os_area_msg_seqno) {
+ pr_err("SNP unable to get message sequence counter\n");
+ return 0;
+ }
+
+ lockdep_assert_held(&snp_dev->cmd_mutex);
+
+ /* Read the current message sequence counter from secrets pages */
+ count = *os_area_msg_seqno;
+
+ return count + 1;
+}
+
+/* Return a non-zero on success */
+static u64 snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
+{
+ u64 count = __snp_get_msg_seqno(snp_dev);
+
+ /*
+ * The message sequence counter for the SNP guest request is a 64-bit
+ * value but the version 2 of GHCB specification defines a 32-bit storage
+ * for it. If the counter exceeds the 32-bit value then return zero.
+ * The caller should check the return value, but if the caller happens to
+ * not check the value and use it, then the firmware treats zero as an
+ * invalid number and will fail the message request.
+ */
+ if (count >= UINT_MAX) {
+ pr_err("SNP request message sequence counter overflow\n");
+ return 0;
+ }
+
+ return count;
+}
+
+static void snp_inc_msg_seqno(struct snp_guest_dev *snp_dev)
+{
+ u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev->vmpck_id);
+
+ if (!os_area_msg_seqno) {
+ pr_err("SNP unable to get message sequence counter\n");
+ return;
+ }
+
+ lockdep_assert_held(&snp_dev->cmd_mutex);
+
+ /*
+ * The counter is also incremented by the PSP, so increment it by 2
+ * and save in secrets page.
+ */
+ *os_area_msg_seqno += 2;
+}
+
+static struct aesgcm_ctx *snp_init_crypto(unsigned int vmpck_id)
+{
+ struct aesgcm_ctx *ctx;
+ u8 *key;
+
+ if (snp_is_vmpck_empty(vmpck_id)) {
+ pr_err("VM communication key VMPCK%u is null\n", vmpck_id);
+ return NULL;
+ }
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
+ if (!ctx)
+ return NULL;
+
+ key = snp_get_vmpck(vmpck_id);
+ if (aesgcm_expandkey(ctx, key, VMPCK_KEY_LEN, AUTHTAG_LEN)) {
+ pr_err("Crypto context initialization failed\n");
+ kfree(ctx);
+ return NULL;
+ }
+
+ return ctx;
+}
+
+int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev)
+{
+ struct sev_guest_platform_data *pdata;
+ int ret;
+
+ if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
+ pr_err("SNP not supported\n");
+ return 0;
+ }
+
+ if (platform_data) {
+ pr_debug("SNP platform data already initialized.\n");
+ goto create_ctx;
+ }
+
+ if (!secrets_pa) {
+ pr_err("SNP secrets page not found\n");
return -ENODEV;
+ }

- if (platform_device_register(&sev_guest_device))
+ pdata = kzalloc(sizeof(struct sev_guest_platform_data), GFP_KERNEL);
+ if (!pdata) {
+ pr_err("Allocation of SNP guest platform data failed\n");
+ return -ENOMEM;
+ }
+
+ pdata->layout = (__force void *)ioremap_encrypted(secrets_pa, PAGE_SIZE);
+ if (!pdata->layout) {
+ pr_err("Failed to map SNP secrets page.\n");
+ goto e_free_pdata;
+ }
+
+ ret = -ENOMEM;
+ /* Allocate the shared page used for the request and response message. */
+ pdata->request = alloc_shared_pages(sizeof(struct snp_guest_msg));
+ if (!pdata->request)
+ goto e_unmap;
+
+ pdata->response = alloc_shared_pages(sizeof(struct snp_guest_msg));
+ if (!pdata->response)
+ goto e_free_request;
+
+ /* initial the input address for guest request */
+ pdata->input.req_gpa = __pa(pdata->request);
+ pdata->input.resp_gpa = __pa(pdata->response);
+ platform_data = pdata;
+
+create_ctx:
+ ret = -EIO;
+ snp_dev->ctx = snp_init_crypto(snp_dev->vmpck_id);
+ if (!snp_dev->ctx) {
+ pr_err("SNP crypto context initialization failed\n");
+ platform_data = NULL;
+ goto e_free_response;
+ }
+
+ snp_dev->pdata = platform_data;
+
+ return 0;
+
+e_free_response:
+ free_shared_pages(pdata->response, sizeof(struct snp_guest_msg));
+e_free_request:
+ free_shared_pages(pdata->request, sizeof(struct snp_guest_msg));
+e_unmap:
+ iounmap(pdata->layout);
+e_free_pdata:
+ kfree(pdata);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(snp_setup_psp_messaging);
+
+static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, struct snp_guest_req *guest_req,
+ struct sev_guest_platform_data *pdata)
+{
+ struct snp_guest_msg *resp = &snp_dev->secret_response;
+ struct snp_guest_msg *req = &snp_dev->secret_request;
+ struct snp_guest_msg_hdr *req_hdr = &req->hdr;
+ struct snp_guest_msg_hdr *resp_hdr = &resp->hdr;
+ struct aesgcm_ctx *ctx = snp_dev->ctx;
+ u8 iv[GCM_AES_IV_SIZE] = {};
+
+ pr_debug("response [seqno %lld type %d version %d sz %d]\n",
+ resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version,
+ resp_hdr->msg_sz);
+
+ /* Copy response from shared memory to encrypted memory. */
+ memcpy(resp, pdata->response, sizeof(*resp));
+
+ /* Verify that the sequence counter is incremented by 1 */
+ if (unlikely(resp_hdr->msg_seqno != (req_hdr->msg_seqno + 1)))
+ return -EBADMSG;
+
+ /* Verify response message type and version number. */
+ if (resp_hdr->msg_type != (req_hdr->msg_type + 1) ||
+ resp_hdr->msg_version != req_hdr->msg_version)
+ return -EBADMSG;
+
+ /*
+ * If the message size is greater than our buffer length then return
+ * an error.
+ */
+ if (unlikely((resp_hdr->msg_sz + ctx->authsize) > guest_req->resp_sz))
+ return -EBADMSG;
+
+ /* Decrypt the payload */
+ memcpy(iv, &resp_hdr->msg_seqno, sizeof(resp_hdr->msg_seqno));
+ if (!aesgcm_decrypt(ctx, guest_req->resp_buf, resp->payload, resp_hdr->msg_sz,
+ &resp_hdr->algo, AAD_LEN, iv, resp_hdr->authtag))
+ return -EBADMSG;
+
+ return 0;
+}
+
+static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, struct snp_guest_req *req)
+{
+ struct snp_guest_msg *msg = &snp_dev->secret_request;
+ struct snp_guest_msg_hdr *hdr = &msg->hdr;
+ struct aesgcm_ctx *ctx = snp_dev->ctx;
+ u8 iv[GCM_AES_IV_SIZE] = {};
+
+ memset(msg, 0, sizeof(*msg));
+
+ hdr->algo = SNP_AEAD_AES_256_GCM;
+ hdr->hdr_version = MSG_HDR_VER;
+ hdr->hdr_sz = sizeof(*hdr);
+ hdr->msg_type = req->msg_type;
+ hdr->msg_version = req->msg_version;
+ hdr->msg_seqno = seqno;
+ hdr->msg_vmpck = req->vmpck_id;
+ hdr->msg_sz = req->req_sz;
+
+ /* Verify the sequence number is non-zero */
+ if (!hdr->msg_seqno)
+ return -ENOSR;
+
+ pr_debug("request [seqno %lld type %d version %d sz %d]\n",
+ hdr->msg_seqno, hdr->msg_type, hdr->msg_version, hdr->msg_sz);
+
+ if (WARN_ON((req->req_sz + ctx->authsize) > sizeof(msg->payload)))
+ return -EBADMSG;
+
+ memcpy(iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
+ aesgcm_encrypt(ctx, msg->payload, req->req_buf, req->req_sz, &hdr->algo,
+ AAD_LEN, iv, hdr->authtag);
+
+ return 0;
+}
+
+static int __handle_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
+ struct snp_guest_request_ioctl *rio,
+ struct sev_guest_platform_data *pdata)
+{
+ unsigned long req_start = jiffies;
+ unsigned int override_npages = 0;
+ u64 override_err = 0;
+ int rc;
+
+retry_request:
+ /*
+ * Call firmware to process the request. In this function the encrypted
+ * message enters shared memory with the host. So after this call the
+ * sequence number must be incremented or the VMPCK must be deleted to
+ * prevent reuse of the IV.
+ */
+ rc = snp_issue_guest_request(req, &pdata->input, rio);
+ switch (rc) {
+ case -ENOSPC:
+ /*
+ * If the extended guest request fails due to having too
+ * small of a certificate data buffer, retry the same
+ * guest request without the extended data request in
+ * order to increment the sequence number and thus avoid
+ * IV reuse.
+ */
+ override_npages = req->data_npages;
+ req->exit_code = SVM_VMGEXIT_GUEST_REQUEST;
+
+ /*
+ * Override the error to inform callers the given extended
+ * request buffer size was too small and give the caller the
+ * required buffer size.
+ */
+ override_err = SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN);
+
+ /*
+ * If this call to the firmware succeeds, the sequence number can
+ * be incremented allowing for continued use of the VMPCK. If
+ * there is an error reflected in the return value, this value
+ * is checked further down and the result will be the deletion
+ * of the VMPCK and the error code being propagated back to the
+ * user as an ioctl() return code.
+ */
+ goto retry_request;
+
+ /*
+	 * The host may return SNP_GUEST_VMM_ERR_BUSY if the request has been
+ * throttled. Retry in the driver to avoid returning and reusing the
+ * message sequence number on a different message.
+ */
+ case -EAGAIN:
+ if (jiffies - req_start > SNP_REQ_MAX_RETRY_DURATION) {
+ rc = -ETIMEDOUT;
+ break;
+ }
+ schedule_timeout_killable(SNP_REQ_RETRY_DELAY);
+ goto retry_request;
+ }
+
+ /*
+ * Increment the message sequence number. There is no harm in doing
+ * this now because decryption uses the value stored in the response
+ * structure and any failure will wipe the VMPCK, preventing further
+ * use anyway.
+ */
+ snp_inc_msg_seqno(snp_dev);
+
+ if (override_err) {
+ rio->exitinfo2 = override_err;
+
+ /*
+ * If an extended guest request was issued and the supplied certificate
+ * buffer was not large enough, a standard guest request was issued to
+ * prevent IV reuse. If the standard request was successful, return -EIO
+ * back to the caller as would have originally been returned.
+ */
+ if (!rc && override_err == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
+ rc = -EIO;
+ }
+
+ if (override_npages)
+ req->data_npages = override_npages;
+
+ return rc;
+}
+
+int snp_send_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
+ struct snp_guest_request_ioctl *rio)
+{
+ struct sev_guest_platform_data *pdata;
+ u64 seqno;
+ int rc;
+
+ if (!snp_dev || !snp_dev->pdata || !req || !rio)
return -ENODEV;

- pr_info("SNP guest platform device initialized.\n");
+ pdata = snp_dev->pdata;
+
+ /* Get message sequence and verify that its a non-zero */
+ seqno = snp_get_msg_seqno(snp_dev);
+ if (!seqno)
+ return -EIO;
+
+ /* Clear shared memory's response for the host to populate. */
+ memset(pdata->response, 0, sizeof(struct snp_guest_msg));
+
+ /* Encrypt the userspace provided payload in pdata->secret_request. */
+ rc = enc_payload(snp_dev, seqno, req);
+ if (rc)
+ return rc;
+
+ /*
+ * Write the fully encrypted request to the shared unencrypted
+ * request page.
+ */
+ memcpy(pdata->request, &snp_dev->secret_request, sizeof(snp_dev->secret_request));
+
+ rc = __handle_guest_request(snp_dev, req, rio, pdata);
+ if (rc) {
+ if (rc == -EIO &&
+ rio->exitinfo2 == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
+ return rc;
+
+ pr_alert("Detected error from ASP request. rc: %d, exitinfo2: 0x%llx\n",
+ rc, rio->exitinfo2);
+ snp_disable_vmpck(snp_dev);
+ return rc;
+ }
+
+ rc = verify_and_dec_payload(snp_dev, req, pdata);
+ if (rc) {
+ pr_alert("Detected unexpected decode failure from ASP. rc: %d\n", rc);
+ snp_disable_vmpck(snp_dev);
+ return rc;
+ }
+
return 0;
}
-device_initcall(snp_init_platform_device);
+EXPORT_SYMBOL_GPL(snp_send_guest_request);
+
+bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id)
+{
+ if (WARN_ON(vmpck_id > 3))
+ return false;
+
+ dev->vmpck_id = vmpck_id;
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(snp_assign_vmpck);
diff --git a/drivers/virt/coco/sev-guest/Kconfig b/drivers/virt/coco/sev-guest/Kconfig
index 0b772bd921d8..a6405ab6c2c3 100644
--- a/drivers/virt/coco/sev-guest/Kconfig
+++ b/drivers/virt/coco/sev-guest/Kconfig
@@ -2,7 +2,6 @@ config SEV_GUEST
tristate "AMD SEV Guest driver"
default m
depends on AMD_MEM_ENCRYPT
- select CRYPTO_LIB_AESGCM
select TSM_REPORTS
help
SEV-SNP firmware provides the guest a mechanism to communicate with
diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
index 0f2134deca51..1cdf7ab04d39 100644
--- a/drivers/virt/coco/sev-guest/sev-guest.c
+++ b/drivers/virt/coco/sev-guest/sev-guest.c
@@ -31,130 +31,10 @@

#define DEVICE_NAME "sev-guest"

-#define SNP_REQ_MAX_RETRY_DURATION (60*HZ)
-#define SNP_REQ_RETRY_DELAY (2*HZ)
-
-struct snp_guest_dev {
- struct device *dev;
- struct miscdevice misc;
-
- /* Mutex to serialize the shared buffer access and command handling. */
- struct mutex cmd_mutex;
-
- void *certs_data;
- struct aesgcm_ctx *ctx;
- /* request and response are in unencrypted memory */
- struct snp_guest_msg *request, *response;
-
- /*
- * Avoid information leakage by double-buffering shared messages
- * in fields that are in regular encrypted memory.
- */
- struct snp_guest_msg secret_request, secret_response;
-
- struct snp_secrets_page_layout *layout;
- struct snp_req_data input;
- union {
- struct snp_report_req report;
- struct snp_derived_key_req derived_key;
- struct snp_ext_report_req ext_report;
- } req;
- unsigned int vmpck_id;
-};
-
static u32 vmpck_id;
module_param(vmpck_id, uint, 0444);
MODULE_PARM_DESC(vmpck_id, "The VMPCK ID to use when communicating with the PSP.");

-static inline u8 *snp_get_vmpck(struct snp_guest_dev *snp_dev)
-{
- return snp_dev->layout->vmpck0 + snp_dev->vmpck_id * VMPCK_KEY_LEN;
-}
-
-static inline u32 *snp_get_os_area_msg_seqno(struct snp_guest_dev *snp_dev)
-{
- return &snp_dev->layout->os_area.msg_seqno_0 + snp_dev->vmpck_id;
-}
-
-static bool snp_is_vmpck_empty(struct snp_guest_dev *snp_dev)
-{
- char zero_key[VMPCK_KEY_LEN] = {0};
- u8 *key = snp_get_vmpck(snp_dev);
-
- return !memcmp(key, zero_key, VMPCK_KEY_LEN);
-}
-
-/*
- * If an error is received from the host or AMD Secure Processor (ASP) there
- * are two options. Either retry the exact same encrypted request or discontinue
- * using the VMPCK.
- *
- * This is because in the current encryption scheme GHCB v2 uses AES-GCM to
- * encrypt the requests. The IV for this scheme is the sequence number. GCM
- * cannot tolerate IV reuse.
- *
- * The ASP FW v1.51 only increments the sequence numbers on a successful
- * guest<->ASP back and forth and only accepts messages at its exact sequence
- * number.
- *
- * So if the sequence number were to be reused the encryption scheme is
- * vulnerable. If the sequence number were incremented for a fresh IV the ASP
- * will reject the request.
- */
-static void snp_disable_vmpck(struct snp_guest_dev *snp_dev)
-{
- u8 *key = snp_get_vmpck(snp_dev);
-
- dev_alert(snp_dev->dev, "Disabling vmpck_id %u to prevent IV reuse.\n",
- snp_dev->vmpck_id);
- memzero_explicit(key, VMPCK_KEY_LEN);
-}
-
-static inline u64 __snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
-{
- u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev);
- u64 count;
-
- lockdep_assert_held(&snp_dev->cmd_mutex);
-
- /* Read the current message sequence counter from secrets pages */
- count = *os_area_msg_seqno;
-
- return count + 1;
-}
-
-/* Return a non-zero on success */
-static u64 snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
-{
- u64 count = __snp_get_msg_seqno(snp_dev);
-
- /*
- * The message sequence counter for the SNP guest request is a 64-bit
- * value but the version 2 of GHCB specification defines a 32-bit storage
- * for it. If the counter exceeds the 32-bit value then return zero.
- * The caller should check the return value, but if the caller happens to
- * not check the value and use it, then the firmware treats zero as an
- * invalid number and will fail the message request.
- */
- if (count >= UINT_MAX) {
- dev_err(snp_dev->dev, "request message sequence counter overflow\n");
- return 0;
- }
-
- return count;
-}
-
-static void snp_inc_msg_seqno(struct snp_guest_dev *snp_dev)
-{
- u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev);
-
- /*
- * The counter is also incremented by the PSP, so increment it by 2
- * and save in secrets page.
- */
- *os_area_msg_seqno += 2;
-}
-
static inline struct snp_guest_dev *to_snp_dev(struct file *file)
{
struct miscdevice *dev = file->private_data;
@@ -162,241 +42,6 @@ static inline struct snp_guest_dev *to_snp_dev(struct file *file)
return container_of(dev, struct snp_guest_dev, misc);
}

-static struct aesgcm_ctx *snp_init_crypto(struct snp_guest_dev *snp_dev)
-{
- struct aesgcm_ctx *ctx;
- u8 *key;
-
- if (snp_is_vmpck_empty(snp_dev)) {
- pr_err("VM communication key VMPCK%u is null\n", vmpck_id);
- return NULL;
- }
-
- ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
- if (!ctx)
- return NULL;
-
- key = snp_get_vmpck(snp_dev);
- if (aesgcm_expandkey(ctx, key, VMPCK_KEY_LEN, AUTHTAG_LEN)) {
- pr_err("Crypto context initialization failed\n");
- kfree(ctx);
- return NULL;
- }
-
- return ctx;
-}
-
-static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, struct snp_guest_req *guest_req)
-{
- struct snp_guest_msg *resp = &snp_dev->secret_response;
- struct snp_guest_msg *req = &snp_dev->secret_request;
- struct snp_guest_msg_hdr *req_hdr = &req->hdr;
- struct snp_guest_msg_hdr *resp_hdr = &resp->hdr;
- struct aesgcm_ctx *ctx = snp_dev->ctx;
- u8 iv[GCM_AES_IV_SIZE] = {};
-
- pr_debug("response [seqno %lld type %d version %d sz %d]\n",
- resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version,
- resp_hdr->msg_sz);
-
- /* Copy response from shared memory to encrypted memory. */
- memcpy(resp, snp_dev->response, sizeof(*resp));
-
- /* Verify that the sequence counter is incremented by 1 */
- if (unlikely(resp_hdr->msg_seqno != (req_hdr->msg_seqno + 1)))
- return -EBADMSG;
-
- /* Verify response message type and version number. */
- if (resp_hdr->msg_type != (req_hdr->msg_type + 1) ||
- resp_hdr->msg_version != req_hdr->msg_version)
- return -EBADMSG;
-
- /*
- * If the message size is greater than our buffer length then return
- * an error.
- */
- if (unlikely((resp_hdr->msg_sz + ctx->authsize) > guest_req->resp_sz))
- return -EBADMSG;
-
- /* Decrypt the payload */
- memcpy(iv, &resp_hdr->msg_seqno, sizeof(resp_hdr->msg_seqno));
- if (!aesgcm_decrypt(ctx, guest_req->resp_buf, resp->payload, resp_hdr->msg_sz,
- &resp_hdr->algo, AAD_LEN, iv, resp_hdr->authtag))
- return -EBADMSG;
-
- return 0;
-}
-
-static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, struct snp_guest_req *req)
-{
- struct snp_guest_msg *msg = &snp_dev->secret_request;
- struct snp_guest_msg_hdr *hdr = &msg->hdr;
- struct aesgcm_ctx *ctx = snp_dev->ctx;
- u8 iv[GCM_AES_IV_SIZE] = {};
-
- memset(msg, 0, sizeof(*msg));
-
- hdr->algo = SNP_AEAD_AES_256_GCM;
- hdr->hdr_version = MSG_HDR_VER;
- hdr->hdr_sz = sizeof(*hdr);
- hdr->msg_type = req->msg_type;
- hdr->msg_version = req->msg_version;
- hdr->msg_seqno = seqno;
- hdr->msg_vmpck = req->vmpck_id;
- hdr->msg_sz = req->req_sz;
-
- /* Verify the sequence number is non-zero */
- if (!hdr->msg_seqno)
- return -ENOSR;
-
- pr_debug("request [seqno %lld type %d version %d sz %d]\n",
- hdr->msg_seqno, hdr->msg_type, hdr->msg_version, hdr->msg_sz);
-
- if (WARN_ON((req->req_sz + ctx->authsize) > sizeof(msg->payload)))
- return -EBADMSG;
-
- memcpy(iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
- aesgcm_encrypt(ctx, msg->payload, req->req_buf, req->req_sz, &hdr->algo,
- AAD_LEN, iv, hdr->authtag);
-
- return 0;
-}
-
-static int __handle_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
- struct snp_guest_request_ioctl *rio)
-{
- unsigned long req_start = jiffies;
- unsigned int override_npages = 0;
- u64 override_err = 0;
- int rc;
-
-retry_request:
- /*
- * Call firmware to process the request. In this function the encrypted
- * message enters shared memory with the host. So after this call the
- * sequence number must be incremented or the VMPCK must be deleted to
- * prevent reuse of the IV.
- */
- rc = snp_issue_guest_request(req, &snp_dev->input, rio);
- switch (rc) {
- case -ENOSPC:
- /*
- * If the extended guest request fails due to having too
- * small of a certificate data buffer, retry the same
- * guest request without the extended data request in
- * order to increment the sequence number and thus avoid
- * IV reuse.
- */
- override_npages = req->data_npages;
- req->exit_code = SVM_VMGEXIT_GUEST_REQUEST;
-
- /*
- * Override the error to inform callers the given extended
- * request buffer size was too small and give the caller the
- * required buffer size.
- */
- override_err = SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN);
-
- /*
- * If this call to the firmware succeeds, the sequence number can
- * be incremented allowing for continued use of the VMPCK. If
- * there is an error reflected in the return value, this value
- * is checked further down and the result will be the deletion
- * of the VMPCK and the error code being propagated back to the
- * user as an ioctl() return code.
- */
- goto retry_request;
-
- /*
- * The host may return SNP_GUEST_VMM_ERR_BUSY if the request has been
- * throttled. Retry in the driver to avoid returning and reusing the
- * message sequence number on a different message.
- */
- case -EAGAIN:
- if (jiffies - req_start > SNP_REQ_MAX_RETRY_DURATION) {
- rc = -ETIMEDOUT;
- break;
- }
- schedule_timeout_killable(SNP_REQ_RETRY_DELAY);
- goto retry_request;
- }
-
- /*
- * Increment the message sequence number. There is no harm in doing
- * this now because decryption uses the value stored in the response
- * structure and any failure will wipe the VMPCK, preventing further
- * use anyway.
- */
- snp_inc_msg_seqno(snp_dev);
-
- if (override_err) {
- rio->exitinfo2 = override_err;
-
- /*
- * If an extended guest request was issued and the supplied certificate
- * buffer was not large enough, a standard guest request was issued to
- * prevent IV reuse. If the standard request was successful, return -EIO
- * back to the caller as would have originally been returned.
- */
- if (!rc && override_err == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
- rc = -EIO;
- }
-
- if (override_npages)
- req->data_npages = override_npages;
-
- return rc;
-}
-
-static int snp_send_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
- struct snp_guest_request_ioctl *rio)
-{
- u64 seqno;
- int rc;
-
- /* Get message sequence and verify that its a non-zero */
- seqno = snp_get_msg_seqno(snp_dev);
- if (!seqno)
- return -EIO;
-
- /* Clear shared memory's response for the host to populate. */
- memset(snp_dev->response, 0, sizeof(struct snp_guest_msg));
-
- /* Encrypt the userspace provided payload in snp_dev->secret_request. */
- rc = enc_payload(snp_dev, seqno, req);
- if (rc)
- return rc;
-
- /*
- * Write the fully encrypted request to the shared unencrypted
- * request page.
- */
- memcpy(snp_dev->request, &snp_dev->secret_request,
- sizeof(snp_dev->secret_request));
-
- rc = __handle_guest_request(snp_dev, req, rio);
- if (rc) {
- if (rc == -EIO &&
- rio->exitinfo2 == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
- return rc;
-
- dev_alert(snp_dev->dev,
- "Detected error from ASP request. rc: %d, exitinfo2: 0x%llx\n",
- rc, rio->exitinfo2);
- snp_disable_vmpck(snp_dev);
- return rc;
- }
-
- rc = verify_and_dec_payload(snp_dev, req);
- if (rc) {
- dev_alert(snp_dev->dev, "Detected unexpected decode failure from ASP. rc: %d\n", rc);
- snp_disable_vmpck(snp_dev);
- return rc;
- }
-
- return 0;
-}
-
struct snp_req_resp {
sockptr_t req_data;
sockptr_t resp_data;
@@ -607,7 +252,7 @@ static long snp_guest_ioctl(struct file *file, unsigned int ioctl, unsigned long
mutex_lock(&snp_dev->cmd_mutex);

/* Check if the VMPCK is not empty */
- if (snp_is_vmpck_empty(snp_dev)) {
+ if (snp_is_vmpck_empty(snp_dev->vmpck_id)) {
dev_err_ratelimited(snp_dev->dev, "VMPCK is disabled\n");
mutex_unlock(&snp_dev->cmd_mutex);
return -ENOTTY;
@@ -642,58 +287,11 @@ static long snp_guest_ioctl(struct file *file, unsigned int ioctl, unsigned long
return ret;
}

-static void free_shared_pages(void *buf, size_t sz)
-{
- unsigned int npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
- int ret;
-
- if (!buf)
- return;
-
- ret = set_memory_encrypted((unsigned long)buf, npages);
- if (ret) {
- WARN_ONCE(ret, "failed to restore encryption mask (leak it)\n");
- return;
- }
-
- __free_pages(virt_to_page(buf), get_order(sz));
-}
-
-static void *alloc_shared_pages(struct device *dev, size_t sz)
-{
- unsigned int npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
- struct page *page;
- int ret;
-
- page = alloc_pages(GFP_KERNEL_ACCOUNT, get_order(sz));
- if (!page)
- return NULL;
-
- ret = set_memory_decrypted((unsigned long)page_address(page), npages);
- if (ret) {
- dev_err(dev, "failed to mark page shared, ret=%d\n", ret);
- __free_pages(page, get_order(sz));
- return NULL;
- }
-
- return page_address(page);
-}
-
static const struct file_operations snp_guest_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = snp_guest_ioctl,
};

-bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id)
-{
- if (WARN_ON(vmpck_id > 3))
- return false;
-
- dev->vmpck_id = vmpck_id;
-
- return true;
-}
-
struct snp_msg_report_resp_hdr {
u32 status;
u32 report_size;
@@ -727,7 +325,7 @@ static int sev_report_new(struct tsm_report *report, void *data)
guard(mutex)(&snp_dev->cmd_mutex);

/* Check if the VMPCK is not empty */
- if (snp_is_vmpck_empty(snp_dev)) {
+ if (snp_is_vmpck_empty(snp_dev->vmpck_id)) {
dev_err_ratelimited(snp_dev->dev, "VMPCK is disabled\n");
return -ENOTTY;
}
@@ -820,76 +418,43 @@ static void unregister_sev_tsm(void *data)

static int __init sev_guest_probe(struct platform_device *pdev)
{
- struct snp_secrets_page_layout *layout;
- struct sev_guest_platform_data *data;
struct device *dev = &pdev->dev;
struct snp_guest_dev *snp_dev;
struct miscdevice *misc;
- void __iomem *mapping;
int ret;

if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
return -ENODEV;

- if (!dev->platform_data)
- return -ENODEV;
-
- data = (struct sev_guest_platform_data *)dev->platform_data;
- mapping = ioremap_encrypted(data->secrets_gpa, PAGE_SIZE);
- if (!mapping)
- return -ENODEV;
-
- layout = (__force void *)mapping;
-
- ret = -ENOMEM;
snp_dev = devm_kzalloc(&pdev->dev, sizeof(struct snp_guest_dev), GFP_KERNEL);
if (!snp_dev)
- goto e_unmap;
+ return -ENOMEM;

- ret = -EINVAL;
- snp_dev->layout = layout;
if (!snp_assign_vmpck(snp_dev, vmpck_id)) {
dev_err(dev, "invalid vmpck id %u\n", vmpck_id);
- goto e_unmap;
+ ret = -EINVAL;
+ goto e_free_snpdev;
}

- /* Verify that VMPCK is not zero. */
- if (snp_is_vmpck_empty(snp_dev)) {
- dev_err(dev, "vmpck id %u is null\n", vmpck_id);
- goto e_unmap;
+ if (snp_setup_psp_messaging(snp_dev)) {
+ dev_err(dev, "Unable to setup PSP messaging vmpck id %u\n", snp_dev->vmpck_id);
+ ret = -ENODEV;
+ goto e_free_snpdev;
}

mutex_init(&snp_dev->cmd_mutex);
platform_set_drvdata(pdev, snp_dev);
snp_dev->dev = dev;

- /* Allocate the shared page used for the request and response message. */
- snp_dev->request = alloc_shared_pages(dev, sizeof(struct snp_guest_msg));
- if (!snp_dev->request)
- goto e_unmap;
-
- snp_dev->response = alloc_shared_pages(dev, sizeof(struct snp_guest_msg));
- if (!snp_dev->response)
- goto e_free_request;
-
- snp_dev->certs_data = alloc_shared_pages(dev, SEV_FW_BLOB_MAX_SIZE);
+ snp_dev->certs_data = alloc_shared_pages(SEV_FW_BLOB_MAX_SIZE);
if (!snp_dev->certs_data)
- goto e_free_response;
-
- ret = -EIO;
- snp_dev->ctx = snp_init_crypto(snp_dev);
- if (!snp_dev->ctx)
- goto e_free_cert_data;
+ goto e_free_ctx;

misc = &snp_dev->misc;
misc->minor = MISC_DYNAMIC_MINOR;
misc->name = DEVICE_NAME;
misc->fops = &snp_guest_fops;

- /* initial the input address for guest request */
- snp_dev->input.req_gpa = __pa(snp_dev->request);
- snp_dev->input.resp_gpa = __pa(snp_dev->response);
-
ret = tsm_register(&sev_tsm_ops, snp_dev, &tsm_report_extra_type);
if (ret)
goto e_free_cert_data;
@@ -900,21 +465,18 @@ static int __init sev_guest_probe(struct platform_device *pdev)

ret = misc_register(misc);
if (ret)
- goto e_free_ctx;
+ goto e_free_cert_data;
+
+ dev_info(dev, "Initialized SEV guest driver (using vmpck_id %u)\n", snp_dev->vmpck_id);

- dev_info(dev, "Initialized SEV guest driver (using vmpck_id %u)\n", vmpck_id);
return 0;

-e_free_ctx:
- kfree(snp_dev->ctx);
e_free_cert_data:
free_shared_pages(snp_dev->certs_data, SEV_FW_BLOB_MAX_SIZE);
-e_free_response:
- free_shared_pages(snp_dev->response, sizeof(struct snp_guest_msg));
-e_free_request:
- free_shared_pages(snp_dev->request, sizeof(struct snp_guest_msg));
-e_unmap:
- iounmap(mapping);
+e_free_ctx:
+ kfree(snp_dev->ctx);
+e_free_snpdev:
+ kfree(snp_dev);
return ret;
}

@@ -923,10 +485,9 @@ static int __exit sev_guest_remove(struct platform_device *pdev)
struct snp_guest_dev *snp_dev = platform_get_drvdata(pdev);

free_shared_pages(snp_dev->certs_data, SEV_FW_BLOB_MAX_SIZE);
- free_shared_pages(snp_dev->response, sizeof(struct snp_guest_msg));
- free_shared_pages(snp_dev->request, sizeof(struct snp_guest_msg));
- kfree(snp_dev->ctx);
misc_deregister(&snp_dev->misc);
+ kfree(snp_dev->ctx);
+ kfree(snp_dev);

return 0;
}
--
2.34.1

2023-11-28 13:02:54

by Nikunj A. Dadhania

[permalink] [raw]
Subject: [PATCH v6 08/16] x86/mm: Add generic guest initialization hook

Add a generic enc_init guest hook for performing vendor-specific
initialization. The enc_init hook can be used for early guest feature
initialization before the secondary processors are brought up.
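A minimal wiring sketch (the hook name is from this series, the vendor
callback body is only a placeholder): the platform code overrides the
default no-op during early init, and mem_encrypt_init() invokes it once
before the secondary CPUs come up.

/* Sketch only: vendor code overriding the default no-op hook. */
static void __init example_vendor_enc_init(void)
{
        /* Vendor-specific early feature setup, e.g. Secure TSC preparation. */
}

static void __init example_vendor_early_setup(void)
{
        x86_platform.guest.enc_init = example_vendor_enc_init;
}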

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/include/asm/x86_init.h | 2 ++
arch/x86/kernel/x86_init.c | 2 ++
arch/x86/mm/mem_encrypt.c | 2 ++
3 files changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index c878616a18b8..8095553e14a7 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -148,12 +148,14 @@ struct x86_init_acpi {
* @enc_status_change_finish Notify HV after the encryption status of a range is changed
* @enc_tlb_flush_required Returns true if a TLB flush is needed before changing page encryption status
* @enc_cache_flush_required Returns true if a cache flush is needed before changing page encryption status
+ * @enc_init Prepare and initialize encryption features
*/
struct x86_guest {
bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
bool (*enc_tlb_flush_required)(bool enc);
bool (*enc_cache_flush_required)(void);
+ void (*enc_init)(void);
};

/**
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index a37ebd3b4773..a07985a96ca5 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -136,6 +136,7 @@ static bool enc_status_change_finish_noop(unsigned long vaddr, int npages, bool
static bool enc_tlb_flush_required_noop(bool enc) { return false; }
static bool enc_cache_flush_required_noop(void) { return false; }
static bool is_private_mmio_noop(u64 addr) {return false; }
+static void enc_init_noop(void) { }

struct x86_platform_ops x86_platform __ro_after_init = {
.calibrate_cpu = native_calibrate_cpu_early,
@@ -158,6 +159,7 @@ struct x86_platform_ops x86_platform __ro_after_init = {
.enc_status_change_finish = enc_status_change_finish_noop,
.enc_tlb_flush_required = enc_tlb_flush_required_noop,
.enc_cache_flush_required = enc_cache_flush_required_noop,
+ .enc_init = enc_init_noop,
},
};

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index c290c55b632b..d5bcd63211de 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -85,6 +85,8 @@ void __init mem_encrypt_init(void)
/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
swiotlb_update_mem_attributes();

+ x86_platform.guest.enc_init();
+
print_mem_encrypt_feature_info();
}

--
2.34.1

2023-11-28 13:02:57

by Nikunj A. Dadhania

[permalink] [raw]
Subject: [PATCH v6 11/16] x86/sev: Change TSC MSR behavior for Secure TSC enabled guests

Secure TSC enabled guests should not write to the MSR_IA32_TSC (0x10)
register, as subsequent TSC value reads become undefined. MSR_IA32_TSC
related accesses should also not exit to the hypervisor for such guests.

Accesses to MSR_IA32_TSC therefore need special handling in the #VC
handler for guests with Secure TSC enabled: writes to MSR_IA32_TSC are
ignored, and reads of MSR_IA32_TSC return the result of the RDTSC
instruction.
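A sketch of the guest-visible effect described above (illustration only,
not the handler code itself; the helper name is made up): a WRMSR is
dropped and an RDMSR of MSR_IA32_TSC produces the RDTSC value split
across EDX:EAX.

/* Illustration only: effect of the #VC handling on the guest registers. */
static void example_vc_msr_ia32_tsc(struct pt_regs *regs, bool is_write)
{
        u64 tsc;

        if (is_write)
                return;                 /* the write is silently ignored */

        tsc = rdtsc();                  /* the read returns the RDTSC value */
        regs->ax = lower_32_bits(tsc);
        regs->dx = upper_32_bits(tsc);
}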

Signed-off-by: Nikunj A Dadhania <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
---
arch/x86/kernel/sev.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 1cb6c66d1601..602988080312 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1266,6 +1266,30 @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
/* Is it a WRMSR? */
exit_info_1 = (ctxt->insn.opcode.bytes[1] == 0x30) ? 1 : 0;

+ /*
+ * TSC related accesses should not exit to the hypervisor when a
+ * guest is executing with SecureTSC enabled, so special handling
+ * is required for accesses of MSR_IA32_TSC:
+ *
+ * Writes: Writing to MSR_IA32_TSC can cause subsequent reads
+ * of the TSC to return undefined values, so ignore all
+ * writes.
+ * Reads: Reads of MSR_IA32_TSC should return the current TSC
+ * value, use the value returned by RDTSC.
+ */
+ if (regs->cx == MSR_IA32_TSC && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC)) {
+ u64 tsc;
+
+ if (exit_info_1)
+ return ES_OK;
+
+ tsc = rdtsc();
+ regs->ax = UINT_MAX & tsc;
+ regs->dx = UINT_MAX & (tsc >> 32);
+
+ return ES_OK;
+ }
+
ghcb_set_rcx(ghcb, regs->cx);
if (exit_info_1) {
ghcb_set_rax(ghcb, regs->ax);
--
2.34.1

2023-11-28 13:03:05

by Nikunj A. Dadhania

[permalink] [raw]
Subject: [PATCH v6 13/16] x86/kvmclock: Skip kvmclock when Secure TSC is available

For AMD SNP guests with Secure TSC enabled, skip using kvmclock. The
guest kernel will then fall back to the Secure TSC based clocksource.

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/kernel/kvmclock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index fb8f52149be9..e3de354abf74 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -288,7 +288,7 @@ void __init kvmclock_init(void)
{
u8 flags;

- if (!kvm_para_available() || !kvmclock)
+ if (!kvm_para_available() || !kvmclock || cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
return;

if (kvm_para_has_feature(KVM_FEATURE_CLOCKSOURCE2)) {
--
2.34.1

2023-11-28 13:03:11

by Nikunj A. Dadhania

[permalink] [raw]
Subject: [PATCH v6 12/16] x86/sev: Prevent RDTSC/RDTSCP interception for Secure TSC enabled guests

The hypervisor should not intercept RDTSC/RDTSCP when Secure TSC is
enabled. A #VC exception is generated if the RDTSC/RDTSCP instructions
are intercepted anyway. If that occurs while Secure TSC is enabled,
terminate guest execution.

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/kernel/sev-shared.c | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index ccb0915e84e1..6d9ef5897421 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -991,6 +991,16 @@ static enum es_result vc_handle_rdtsc(struct ghcb *ghcb,
bool rdtscp = (exit_code == SVM_EXIT_RDTSCP);
enum es_result ret;

+ /*
+ * RDTSC and RDTSCP should not be intercepted when Secure TSC is
+ * enabled. Terminate the SNP guest when the interception is enabled.
+ * This file is included from kernel/sev.c and boot/compressed/sev.c,
+ * use sev_status here as cc_platform_has() is not available when
+ * compiling boot/compressed/sev.c.
+ */
+ if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
+ return ES_VMM_ERROR;
+
ret = sev_es_ghcb_hv_call(ghcb, ctxt, exit_code, 0, 0);
if (ret != ES_OK)
return ret;
--
2.34.1

2023-11-28 13:03:16

by Nikunj A. Dadhania

[permalink] [raw]
Subject: [PATCH v6 15/16] x86/cpu/amd: Do not print FW_BUG for Secure TSC

When SecureTSC is enabled and TscInvariant (bit 8) in CPUID_8000_0007_edx
is set, the kernel complains with the firmware bug below:

[Firmware Bug]: TSC doesn't count with P0 frequency!

A Secure TSC need not run at the P0 frequency; the TSC frequency is set
by the VMM as part of the SNP_LAUNCH_START command. Avoid the check when
Secure TSC is enabled.
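For reference, a hedged sketch of the CPUID bit involved (the helper
name is made up for illustration): TscInvariant is bit 8 of
CPUID 0x8000_0007 EDX.

/* Sketch only: reading the TscInvariant bit that triggers the check. */
static bool example_tsc_invariant(void)
{
        return cpuid_edx(0x80000007) & BIT(8);
}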

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/kernel/cpu/amd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index a7eab05e5f29..4826a7393e5b 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -551,7 +551,8 @@ static void early_init_amd_mc(struct cpuinfo_x86 *c)

static void bsp_init_amd(struct cpuinfo_x86 *c)
{
- if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) {
+ if (cpu_has(c, X86_FEATURE_CONSTANT_TSC) &&
+ !cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC)) {

if (c->x86 > 0x10 ||
(c->x86 == 0x10 && c->x86_model >= 0x2)) {
--
2.34.1

2023-11-28 13:03:18

by Nikunj A. Dadhania

[permalink] [raw]
Subject: [PATCH v6 14/16] x86/sev: Mark Secure TSC as reliable

AMD SNP guests may have the Secure TSC feature enabled. When it is, use
the Secure TSC as the only reliable clock source in SEV-SNP guests,
bypassing unstable calibration.
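As a hedged illustration of what marking the TSC reliable buys
(conceptual only; the real checks live in arch/x86/kernel/tsc.c): a
reliable TSC is not cross-checked by the clocksource watchdog.

/* Conceptual sketch only, not the actual tsc.c logic. */
static bool example_tsc_needs_watchdog(void)
{
        /* A TSC marked reliable is not verified against other clocksources. */
        return !boot_cpu_has(X86_FEATURE_TSC_RELIABLE);
}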

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/mm/mem_encrypt_amd.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index f561753fc94d..8614c3028adb 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -487,6 +487,9 @@ void __init sme_early_init(void)
*/
if (sev_status & MSR_AMD64_SEV_ES_ENABLED)
x86_cpuinit.parallel_bringup = false;
+
+ if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
+ setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
}

void __init mem_encrypt_free_decrypted_mem(void)
--
2.34.1

2023-11-28 13:03:27

by Nikunj A. Dadhania

[permalink] [raw]
Subject: [PATCH v6 16/16] x86/sev: Enable Secure TSC for SNP guests

Now that all the required plumbing is done for enabling the SNP Secure
TSC feature, add Secure TSC to the SNP features present list.

Set the synthetic feature bit (X86_FEATURE_SNP_SECURE_TSC) when the SNP
guest is started with Secure TSC.

Signed-off-by: Nikunj A Dadhania <[email protected]>
---
arch/x86/boot/compressed/sev.c | 3 ++-
arch/x86/mm/mem_encrypt.c | 10 ++++++++--
arch/x86/mm/mem_encrypt_amd.c | 4 +++-
3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 454acd7a2daf..2829908602e5 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -375,7 +375,8 @@ static void enforce_vmpl0(void)
* by the guest kernel. As and when a new feature is implemented in the
* guest kernel, a corresponding bit should be added to the mask.
*/
-#define SNP_FEATURES_PRESENT MSR_AMD64_SNP_DEBUG_SWAP
+#define SNP_FEATURES_PRESENT (MSR_AMD64_SNP_DEBUG_SWAP | \
+ MSR_AMD64_SNP_SECURE_TSC)

u64 snp_get_unsupported_features(u64 status)
{
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index d5bcd63211de..b0db76dc4a9d 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -70,8 +70,14 @@ static void print_mem_encrypt_feature_info(void)
pr_cont(" SEV-ES");

/* Secure Nested Paging */
- if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
- pr_cont(" SEV-SNP");
+ if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
+ pr_cont(" SEV-SNP\n");
+ pr_cont("SNP Features active: ");
+
+ /* SNP Secure TSC */
+ if (cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
+ pr_cont(" SECURE-TSC");
+ }

pr_cont("\n");
}
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index 8614c3028adb..2d1ab688c866 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -488,8 +488,10 @@ void __init sme_early_init(void)
if (sev_status & MSR_AMD64_SEV_ES_ENABLED)
x86_cpuinit.parallel_bringup = false;

- if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
+ if (sev_status & MSR_AMD64_SNP_SECURE_TSC) {
+ setup_force_cpu_cap(X86_FEATURE_SNP_SECURE_TSC);
setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
+ }
}

void __init mem_encrypt_free_decrypted_mem(void)
--
2.34.1

2023-11-28 13:39:19

by Nikunj A. Dadhania

[permalink] [raw]
Subject: [PATCH v6 02/16] virt: sev-guest: Move mutex to SNP guest device structure

In preparation for providing a new API to the sev-guest driver for sending
SNP guest messages, move the SNP command mutex into the snp_guest_dev
structure and drop the global snp_cmd_mutex.

Signed-off-by: Nikunj A Dadhania <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
---
drivers/virt/coco/sev-guest/sev-guest.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
index aedc842781b6..8382fd657e67 100644
--- a/drivers/virt/coco/sev-guest/sev-guest.c
+++ b/drivers/virt/coco/sev-guest/sev-guest.c
@@ -39,6 +39,9 @@ struct snp_guest_dev {
struct device *dev;
struct miscdevice misc;

+ /* Mutex to serialize the shared buffer access and command handling. */
+ struct mutex cmd_mutex;
+
void *certs_data;
struct aesgcm_ctx *ctx;
/* request and response are in unencrypted memory */
@@ -65,9 +68,6 @@ static u32 vmpck_id;
module_param(vmpck_id, uint, 0444);
MODULE_PARM_DESC(vmpck_id, "The VMPCK ID to use when communicating with the PSP.");

-/* Mutex to serialize the shared buffer access and command handling. */
-static DEFINE_MUTEX(snp_cmd_mutex);
-
static bool is_vmpck_empty(struct snp_guest_dev *snp_dev)
{
char zero_key[VMPCK_KEY_LEN] = {0};
@@ -107,7 +107,7 @@ static inline u64 __snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
{
u64 count;

- lockdep_assert_held(&snp_cmd_mutex);
+ lockdep_assert_held(&snp_dev->cmd_mutex);

/* Read the current message sequence counter from secrets pages */
count = *snp_dev->os_area_msg_seqno;
@@ -394,7 +394,7 @@ static int get_report(struct snp_guest_dev *snp_dev, struct snp_guest_request_io
struct snp_report_resp *resp;
int rc, resp_len;

- lockdep_assert_held(&snp_cmd_mutex);
+ lockdep_assert_held(&snp_dev->cmd_mutex);

if (!arg->req_data || !arg->resp_data)
return -EINVAL;
@@ -434,7 +434,7 @@ static int get_derived_key(struct snp_guest_dev *snp_dev, struct snp_guest_reque
/* Response data is 64 bytes and max authsize for GCM is 16 bytes. */
u8 buf[64 + 16];

- lockdep_assert_held(&snp_cmd_mutex);
+ lockdep_assert_held(&snp_dev->cmd_mutex);

if (!arg->req_data || !arg->resp_data)
return -EINVAL;
@@ -475,7 +475,7 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques
int ret, npages = 0, resp_len;
sockptr_t certs_address;

- lockdep_assert_held(&snp_cmd_mutex);
+ lockdep_assert_held(&snp_dev->cmd_mutex);

if (sockptr_is_null(io->req_data) || sockptr_is_null(io->resp_data))
return -EINVAL;
@@ -564,12 +564,12 @@ static long snp_guest_ioctl(struct file *file, unsigned int ioctl, unsigned long
if (!input.msg_version)
return -EINVAL;

- mutex_lock(&snp_cmd_mutex);
+ mutex_lock(&snp_dev->cmd_mutex);

/* Check if the VMPCK is not empty */
if (is_vmpck_empty(snp_dev)) {
dev_err_ratelimited(snp_dev->dev, "VMPCK is disabled\n");
- mutex_unlock(&snp_cmd_mutex);
+ mutex_unlock(&snp_dev->cmd_mutex);
return -ENOTTY;
}

@@ -594,7 +594,7 @@ static long snp_guest_ioctl(struct file *file, unsigned int ioctl, unsigned long
break;
}

- mutex_unlock(&snp_cmd_mutex);
+ mutex_unlock(&snp_dev->cmd_mutex);

if (input.exitinfo2 && copy_to_user(argp, &input, sizeof(input)))
return -EFAULT;
@@ -702,7 +702,7 @@ static int sev_report_new(struct tsm_report *report, void *data)
if (!buf)
return -ENOMEM;

- guard(mutex)(&snp_cmd_mutex);
+ guard(mutex)(&snp_dev->cmd_mutex);

/* Check if the VMPCK is not empty */
if (is_vmpck_empty(snp_dev)) {
@@ -837,6 +837,7 @@ static int __init sev_guest_probe(struct platform_device *pdev)
goto e_unmap;
}

+ mutex_init(&snp_dev->cmd_mutex);
platform_set_drvdata(pdev, snp_dev);
snp_dev->dev = dev;
snp_dev->layout = layout;
--
2.34.1

2023-11-28 22:51:44

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v6 07/16] x86/sev: Move and reorganize sev guest request api

Hi Nikunj,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/mm]
[also build test WARNING on linus/master v6.7-rc3 next-20231128]
[cannot apply to tip/x86/core kvm/queue kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Nikunj-A-Dadhania/virt-sev-guest-Move-mutex-to-SNP-guest-device-structure/20231128-220026
base: tip/x86/mm
patch link: https://lore.kernel.org/r/20231128125959.1810039-8-nikunj%40amd.com
patch subject: [PATCH v6 07/16] x86/sev: Move and reorganize sev guest request api
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20231129/[email protected]/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231129/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> drivers/virt/coco/sev-guest/sev-guest.c:450:6: warning: variable 'ret' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (!snp_dev->certs_data)
^~~~~~~~~~~~~~~~~~~~
drivers/virt/coco/sev-guest/sev-guest.c:480:9: note: uninitialized use occurs here
return ret;
^~~
drivers/virt/coco/sev-guest/sev-guest.c:450:2: note: remove the 'if' if its condition is always false
if (!snp_dev->certs_data)
^~~~~~~~~~~~~~~~~~~~~~~~~
drivers/virt/coco/sev-guest/sev-guest.c:424:9: note: initialize the variable 'ret' to silence this warning
int ret;
^
= 0
1 warning generated.
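A hedged sketch of one possible fix (not taken from this thread): set
the error code before the certs_data check so the failure path does not
return an uninitialized value.

        snp_dev->certs_data = alloc_shared_pages(SEV_FW_BLOB_MAX_SIZE);
        if (!snp_dev->certs_data) {
                /* Possible fix (sketch): return a real error code. */
                ret = -ENOMEM;
                goto e_free_ctx;
        }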


vim +450 drivers/virt/coco/sev-guest/sev-guest.c

f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 418
2bf93ffbb97e06 drivers/virt/coco/sevguest/sevguest.c Tom Lendacky 2022-04-20 419 static int __init sev_guest_probe(struct platform_device *pdev)
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 420 {
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 421 struct device *dev = &pdev->dev;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 422 struct snp_guest_dev *snp_dev;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 423 struct miscdevice *misc;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 424 int ret;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 425
d6fd48eff7506b drivers/virt/coco/sev-guest/sev-guest.c Borislav Petkov (AMD 2023-02-15 426) if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
d6fd48eff7506b drivers/virt/coco/sev-guest/sev-guest.c Borislav Petkov (AMD 2023-02-15 427) return -ENODEV;
d6fd48eff7506b drivers/virt/coco/sev-guest/sev-guest.c Borislav Petkov (AMD 2023-02-15 428)
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 429 snp_dev = devm_kzalloc(&pdev->dev, sizeof(struct snp_guest_dev), GFP_KERNEL);
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 430 if (!snp_dev)
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 431 return -ENOMEM;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 432
523ae6405daace drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 433 if (!snp_assign_vmpck(snp_dev, vmpck_id)) {
523ae6405daace drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 434 dev_err(dev, "invalid vmpck id %u\n", vmpck_id);
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 435 ret = -EINVAL;
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 436 goto e_free_snpdev;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 437 }
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 438
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 439 if (snp_setup_psp_messaging(snp_dev)) {
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 440 dev_err(dev, "Unable to setup PSP messaging vmpck id %u\n", snp_dev->vmpck_id);
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 441 ret = -ENODEV;
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 442 goto e_free_snpdev;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 443 }
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 444
4ec0ddf1cc3c0c drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 445 mutex_init(&snp_dev->cmd_mutex);
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 446 platform_set_drvdata(pdev, snp_dev);
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 447 snp_dev->dev = dev;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 448
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 449 snp_dev->certs_data = alloc_shared_pages(SEV_FW_BLOB_MAX_SIZE);
d80b494f712317 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 @450 if (!snp_dev->certs_data)
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 451 goto e_free_ctx;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 452
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 453 misc = &snp_dev->misc;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 454 misc->minor = MISC_DYNAMIC_MINOR;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 455 misc->name = DEVICE_NAME;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 456 misc->fops = &snp_guest_fops;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 457
f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 458 ret = tsm_register(&sev_tsm_ops, snp_dev, &tsm_report_extra_type);
f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 459 if (ret)
f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 460 goto e_free_cert_data;
f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 461
f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 462 ret = devm_add_action_or_reset(&pdev->dev, unregister_sev_tsm, NULL);
f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 463 if (ret)
f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 464 goto e_free_cert_data;
f47906782c7629 drivers/virt/coco/sev-guest/sev-guest.c Dan Williams 2023-10-10 465
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 466 ret = misc_register(misc);
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 467 if (ret)
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 468 goto e_free_cert_data;
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 469
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 470 dev_info(dev, "Initialized SEV guest driver (using vmpck_id %u)\n", snp_dev->vmpck_id);
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 471
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 472 return 0;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 473
d80b494f712317 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 474 e_free_cert_data:
d80b494f712317 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 475 free_shared_pages(snp_dev->certs_data, SEV_FW_BLOB_MAX_SIZE);
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 476 e_free_ctx:
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 477 kfree(snp_dev->ctx);
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 478 e_free_snpdev:
81b918a5844565 drivers/virt/coco/sev-guest/sev-guest.c Nikunj A Dadhania 2023-11-28 479 kfree(snp_dev);
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 480 return ret;
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 481 }
fce96cf0443083 drivers/virt/coco/sevguest/sevguest.c Brijesh Singh 2022-03-07 482
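
A minimal, untested sketch of one way to address the error path marked at
line 450 above; the choice of -ENOMEM for a failed shared-page allocation
is an assumption, but any negative errno assigned before the goto gives
the function a defined return value on that path:

	snp_dev->certs_data = alloc_shared_pages(SEV_FW_BLOB_MAX_SIZE);
	if (!snp_dev->certs_data) {
		/* assumed errno; 'ret' is otherwise not set on this path */
		ret = -ENOMEM;
		goto e_free_ctx;
	}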

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-11-29 02:42:20

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v6 07/16] x86/sev: Move and reorganize sev guest request api

Hi Nikunj,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/mm]
[also build test WARNING on linus/master v6.7-rc3 next-20231128]
[cannot apply to tip/x86/core kvm/queue kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Nikunj-A-Dadhania/virt-sev-guest-Move-mutex-to-SNP-guest-device-structure/20231128-220026
base: tip/x86/mm
patch link: https://lore.kernel.org/r/20231128125959.1810039-8-nikunj%40amd.com
patch subject: [PATCH v6 07/16] x86/sev: Move and reorganize sev guest request api
config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20231129/[email protected]/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231129/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> arch/x86/kernel/sev.c:2404:6: warning: variable 'ret' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
if (!pdata->layout) {
^~~~~~~~~~~~~~
arch/x86/kernel/sev.c:2446:9: note: uninitialized use occurs here
return ret;
^~~
arch/x86/kernel/sev.c:2404:2: note: remove the 'if' if its condition is always false
if (!pdata->layout) {
^~~~~~~~~~~~~~~~~~~~~
arch/x86/kernel/sev.c:2380:9: note: initialize the variable 'ret' to silence this warning
int ret;
^
= 0
1 warning generated.


vim +2404 arch/x86/kernel/sev.c

2376
2377 int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev)
2378 {
2379 struct sev_guest_platform_data *pdata;
2380 int ret;
2381
2382 if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
2383 pr_err("SNP not supported\n");
2384 return 0;
2385 }
2386
2387 if (platform_data) {
2388 pr_debug("SNP platform data already initialized.\n");
2389 goto create_ctx;
2390 }
2391
2392 if (!secrets_pa) {
2393 pr_err("SNP secrets page not found\n");
2394 return -ENODEV;
2395 }
2396
2397 pdata = kzalloc(sizeof(struct sev_guest_platform_data), GFP_KERNEL);
2398 if (!pdata) {
2399 pr_err("Allocation of SNP guest platform data failed\n");
2400 return -ENOMEM;
2401 }
2402
2403 pdata->layout = (__force void *)ioremap_encrypted(secrets_pa, PAGE_SIZE);
> 2404 if (!pdata->layout) {
2405 pr_err("Failed to map SNP secrets page.\n");
2406 goto e_free_pdata;
2407 }
2408
2409 ret = -ENOMEM;
2410 /* Allocate the shared page used for the request and response message. */
2411 pdata->request = alloc_shared_pages(sizeof(struct snp_guest_msg));
2412 if (!pdata->request)
2413 goto e_unmap;
2414
2415 pdata->response = alloc_shared_pages(sizeof(struct snp_guest_msg));
2416 if (!pdata->response)
2417 goto e_free_request;
2418
2419 /* initial the input address for guest request */
2420 pdata->input.req_gpa = __pa(pdata->request);
2421 pdata->input.resp_gpa = __pa(pdata->response);
2422 platform_data = pdata;
2423
2424 create_ctx:
2425 ret = -EIO;
2426 snp_dev->ctx = snp_init_crypto(snp_dev->vmpck_id);
2427 if (!snp_dev->ctx) {
2428 pr_err("SNP crypto context initialization failed\n");
2429 platform_data = NULL;
2430 goto e_free_response;
2431 }
2432
2433 snp_dev->pdata = platform_data;
2434
2435 return 0;
2436
2437 e_free_response:
2438 free_shared_pages(pdata->response, sizeof(struct snp_guest_msg));
2439 e_free_request:
2440 free_shared_pages(pdata->request, sizeof(struct snp_guest_msg));
2441 e_unmap:
2442 iounmap(pdata->layout);
2443 e_free_pdata:
2444 kfree(pdata);
2445
2446 return ret;
2447 }
2448 EXPORT_SYMBOL_GPL(snp_setup_psp_messaging);
2449
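
An untested sketch of one way to address the warning; returning -ENODEV when
the secrets page cannot be mapped is an assumption (any negative errno set
before the goto would do), whereas initializing 'ret' to 0 as the note above
suggests would make the function silently report success on this failure:

	pdata->layout = (__force void *)ioremap_encrypted(secrets_pa, PAGE_SIZE);
	if (!pdata->layout) {
		pr_err("Failed to map SNP secrets page.\n");
		/* assumed errno; 'ret' is otherwise uninitialized here */
		ret = -ENODEV;
		goto e_free_pdata;
	}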

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-11-29 04:15:04

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v6 10/16] x86/sev: Add Secure TSC support for SNP guests

Hi Nikunj,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/mm]
[also build test WARNING on linus/master v6.7-rc3 next-20231128]
[cannot apply to tip/x86/core kvm/queue kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Nikunj-A-Dadhania/virt-sev-guest-Move-mutex-to-SNP-guest-device-structure/20231128-220026
base: tip/x86/mm
patch link: https://lore.kernel.org/r/20231128125959.1810039-11-nikunj%40amd.com
patch subject: [PATCH v6 10/16] x86/sev: Add Secure TSC support for SNP guests
config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20231129/[email protected]/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231129/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> arch/x86/mm/mem_encrypt_amd.c:216:13: warning: no previous prototype for function 'amd_enc_init' [-Wmissing-prototypes]
void __init amd_enc_init(void)
^
arch/x86/mm/mem_encrypt_amd.c:216:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void __init amd_enc_init(void)
^
static
1 warning generated.


vim +/amd_enc_init +216 arch/x86/mm/mem_encrypt_amd.c

215
> 216 void __init amd_enc_init(void)
217 {
218 snp_secure_tsc_prepare();
219 }
220
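
The usual ways to resolve a -Wmissing-prototypes warning are to mark the
function static (if mem_encrypt_amd.c is its only user) or to declare it in a
header included by both the definition and its caller. A sketch of the latter;
the header location and the stub for the !CONFIG_AMD_MEM_ENCRYPT case are
assumptions, not part of the series:

/* e.g. in arch/x86/include/asm/mem_encrypt.h (assumed location) */
#ifdef CONFIG_AMD_MEM_ENCRYPT
void __init amd_enc_init(void);
#else
static inline void amd_enc_init(void) { }
#endif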

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-12-05 17:14:46

by Dionna Amalie Glaze

[permalink] [raw]
Subject: Re: [PATCH v6 07/16] x86/sev: Move and reorganize sev guest request api

On Tue, Nov 28, 2023 at 5:01 AM Nikunj A Dadhania <[email protected]> wrote:
>
> For enabling Secure TSC, SEV-SNP guests need to communicate with the
> AMD Security Processor early during boot. Many of the required
> functions are implemented in the sev-guest driver and therefore not
> available at early boot. Move the required functions and provide
> API to the sev guest driver for sending guest message and vmpck
> routines.
>
> As there is no external caller for snp_issue_guest_request() anymore,
> make it static and drop the prototype from sev-guest.h.
>
> Signed-off-by: Nikunj A Dadhania <[email protected]>
> ---
> arch/x86/Kconfig | 1 +
> arch/x86/include/asm/sev-guest.h | 91 ++++-
> arch/x86/include/asm/sev.h | 10 -
> arch/x86/kernel/sev.c | 451 +++++++++++++++++++++-
> drivers/virt/coco/sev-guest/Kconfig | 1 -
> drivers/virt/coco/sev-guest/sev-guest.c | 479 +-----------------------
> 6 files changed, 550 insertions(+), 483 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 3762f41bb092..b8f374ec5651 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1534,6 +1534,7 @@ config AMD_MEM_ENCRYPT
> select ARCH_HAS_CC_PLATFORM
> select X86_MEM_ENCRYPT
> select UNACCEPTED_MEMORY
> + select CRYPTO_LIB_AESGCM
> help
> Say yes to enable support for the encryption of system memory.
> This requires an AMD processor that supports Secure Memory
> diff --git a/arch/x86/include/asm/sev-guest.h b/arch/x86/include/asm/sev-guest.h
> index 27cc15ad6131..16bf25c14e6f 100644
> --- a/arch/x86/include/asm/sev-guest.h
> +++ b/arch/x86/include/asm/sev-guest.h
> @@ -11,6 +11,11 @@
> #define __VIRT_SEVGUEST_H__
>
> #include <linux/types.h>
> +#include <linux/miscdevice.h>
> +#include <asm/sev.h>
> +
> +#define SNP_REQ_MAX_RETRY_DURATION (60*HZ)
> +#define SNP_REQ_RETRY_DELAY (2*HZ)
>
> #define MAX_AUTHTAG_LEN 32
> #define AUTHTAG_LEN 16
> @@ -58,11 +63,52 @@ struct snp_guest_msg_hdr {
> u8 rsvd3[35];
> } __packed;
>
> +/* SNP Guest message request */
> +struct snp_req_data {
> + unsigned long req_gpa;
> + unsigned long resp_gpa;
> +};
> +
> struct snp_guest_msg {
> struct snp_guest_msg_hdr hdr;
> u8 payload[4000];
> } __packed;
>
> +struct sev_guest_platform_data {
> + /* request and response are in unencrypted memory */
> + struct snp_guest_msg *request;
> + struct snp_guest_msg *response;
> +
> + struct snp_secrets_page_layout *layout;
> + struct snp_req_data input;
> +};
> +
> +struct snp_guest_dev {
> + struct device *dev;
> + struct miscdevice misc;
> +
> + /* Mutex to serialize the shared buffer access and command handling. */
> + struct mutex cmd_mutex;
> +
> + void *certs_data;
> + struct aesgcm_ctx *ctx;
> +
> + /*
> + * Avoid information leakage by double-buffering shared messages
> + * in fields that are in regular encrypted memory
> + */
> + struct snp_guest_msg secret_request;
> + struct snp_guest_msg secret_response;
> +
> + struct sev_guest_platform_data *pdata;
> + union {
> + struct snp_report_req report;
> + struct snp_derived_key_req derived_key;
> + struct snp_ext_report_req ext_report;
> + } req;
> + unsigned int vmpck_id;
> +};
> +
> struct snp_guest_req {
> void *req_buf;
> size_t req_sz;
> @@ -79,6 +125,47 @@ struct snp_guest_req {
> u8 msg_type;
> };
>
> -int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *input,
> - struct snp_guest_request_ioctl *rio);
> +int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev);
> +int snp_send_guest_request(struct snp_guest_dev *dev, struct snp_guest_req *req,
> + struct snp_guest_request_ioctl *rio);
> +bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id);
> +bool snp_is_vmpck_empty(unsigned int vmpck_id);
> +
> +static inline void free_shared_pages(void *buf, size_t sz)
> +{
> + unsigned int npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
> + int ret;
> +
> + if (!buf)
> + return;
> +
> + ret = set_memory_encrypted((unsigned long)buf, npages);
> + if (ret) {
> + WARN_ONCE(ret, "failed to restore encryption mask (leak it)\n");
> + return;
> + }
> +
> + __free_pages(virt_to_page(buf), get_order(sz));
> +}
> +
> +static inline void *alloc_shared_pages(size_t sz)
> +{
> + unsigned int npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
> + struct page *page;
> + int ret;
> +
> + page = alloc_pages(GFP_KERNEL_ACCOUNT, get_order(sz));
> + if (!page)
> + return NULL;
> +
> + ret = set_memory_decrypted((unsigned long)page_address(page), npages);
> + if (ret) {
> + pr_err("%s: failed to mark page shared, ret=%d\n", __func__, ret);
> + __free_pages(page, get_order(sz));
> + return NULL;
> + }
> +
> + return page_address(page);
> +}
> +
> #endif /* __VIRT_SEVGUEST_H__ */
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index 78465a8c7dc6..783150458864 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -93,16 +93,6 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
>
> #define RMPADJUST_VMSA_PAGE_BIT BIT(16)
>
> -/* SNP Guest message request */
> -struct snp_req_data {
> - unsigned long req_gpa;
> - unsigned long resp_gpa;
> -};
> -
> -struct sev_guest_platform_data {
> - u64 secrets_gpa;
> -};
> -
> /*
> * The secrets page contains 96-bytes of reserved field that can be used by
> * the guest OS. The guest OS uses the area to save the message sequence
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index 479ea61f40f3..a413add2fd2c 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -24,6 +24,7 @@
> #include <linux/io.h>
> #include <linux/psp-sev.h>
> #include <uapi/linux/sev-guest.h>
> +#include <crypto/gcm.h>
>
> #include <asm/cpu_entry_area.h>
> #include <asm/stacktrace.h>
> @@ -2150,8 +2151,8 @@ static int __init init_sev_config(char *str)
> }
> __setup("sev=", init_sev_config);
>
> -int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *input,
> - struct snp_guest_request_ioctl *rio)
> +static int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *input,
> + struct snp_guest_request_ioctl *rio)
> {
> struct ghcb_state state;
> struct es_em_ctxt ctxt;
> @@ -2218,7 +2219,6 @@ int snp_issue_guest_request(struct snp_guest_req *req, struct snp_req_data *inpu
>
> return ret;
> }
> -EXPORT_SYMBOL_GPL(snp_issue_guest_request);
>
> static struct platform_device sev_guest_device = {
> .name = "sev-guest",
> @@ -2227,22 +2227,451 @@ static struct platform_device sev_guest_device = {
>
> static int __init snp_init_platform_device(void)
> {
> - struct sev_guest_platform_data data;
> -
> if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
> return -ENODEV;
>
> - if (!secrets_pa)
> + if (platform_device_register(&sev_guest_device))
> return -ENODEV;
>
> - data.secrets_gpa = secrets_pa;
> - if (platform_device_add_data(&sev_guest_device, &data, sizeof(data)))
> + pr_info("SNP guest platform device initialized.\n");
> + return 0;
> +}
> +device_initcall(snp_init_platform_device);
> +
> +static struct sev_guest_platform_data *platform_data;
> +
> +static inline u8 *snp_get_vmpck(unsigned int vmpck_id)
> +{
> + if (!platform_data)
> + return NULL;
> +
> + return platform_data->layout->vmpck0 + vmpck_id * VMPCK_KEY_LEN;
> +}
> +
> +static inline u32 *snp_get_os_area_msg_seqno(unsigned int vmpck_id)
> +{
> + if (!platform_data)
> + return NULL;
> +
> + return &platform_data->layout->os_area.msg_seqno_0 + vmpck_id;
> +}
> +
> +bool snp_is_vmpck_empty(unsigned int vmpck_id)
> +{
> + char zero_key[VMPCK_KEY_LEN] = {0};
> + u8 *key = snp_get_vmpck(vmpck_id);
> +
> + if (key)
> + return !memcmp(key, zero_key, VMPCK_KEY_LEN);
> +
> + return true;
> +}
> +EXPORT_SYMBOL_GPL(snp_is_vmpck_empty);
> +
> +/*
> + * If an error is received from the host or AMD Secure Processor (ASP) there
> + * are two options. Either retry the exact same encrypted request or discontinue
> + * using the VMPCK.
> + *
> + * This is because in the current encryption scheme GHCB v2 uses AES-GCM to
> + * encrypt the requests. The IV for this scheme is the sequence number. GCM
> + * cannot tolerate IV reuse.
> + *
> + * The ASP FW v1.51 only increments the sequence numbers on a successful
> + * guest<->ASP back and forth and only accepts messages at its exact sequence
> + * number.
> + *
> + * So if the sequence number were to be reused the encryption scheme is
> + * vulnerable. If the sequence number were incremented for a fresh IV the ASP
> + * will reject the request.
> + */
> +static void snp_disable_vmpck(struct snp_guest_dev *snp_dev)
> +{
> + u8 *key = snp_get_vmpck(snp_dev->vmpck_id);
> +
> + pr_alert("Disabling vmpck_id %u to prevent IV reuse.\n", snp_dev->vmpck_id);
> + memzero_explicit(key, VMPCK_KEY_LEN);
> +}
> +
> +static inline u64 __snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
> +{
> + u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev->vmpck_id);
> + u64 count;
> +
> + if (!os_area_msg_seqno) {
> + pr_err("SNP unable to get message sequence counter\n");
> + return 0;
> + }
> +
> + lockdep_assert_held(&snp_dev->cmd_mutex);
> +
> + /* Read the current message sequence counter from secrets pages */
> + count = *os_area_msg_seqno;
> +
> + return count + 1;
> +}
> +
> +/* Return a non-zero on success */
> +static u64 snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
> +{
> + u64 count = __snp_get_msg_seqno(snp_dev);
> +
> + /*
> + * The message sequence counter for the SNP guest request is a 64-bit
> + * value but the version 2 of GHCB specification defines a 32-bit storage
> + * for it. If the counter exceeds the 32-bit value then return zero.
> + * The caller should check the return value, but if the caller happens to
> + * not check the value and use it, then the firmware treats zero as an
> + * invalid number and will fail the message request.
> + */
> + if (count >= UINT_MAX) {
> + pr_err("SNP request message sequence counter overflow\n");
> + return 0;
> + }
> +
> + return count;
> +}
> +
> +static void snp_inc_msg_seqno(struct snp_guest_dev *snp_dev)
> +{
> + u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev->vmpck_id);
> +
> + if (!os_area_msg_seqno) {
> + pr_err("SNP unable to get message sequence counter\n");
> + return;
> + }
> +
> + lockdep_assert_held(&snp_dev->cmd_mutex);
> +
> + /*
> + * The counter is also incremented by the PSP, so increment it by 2
> + * and save in secrets page.
> + */
> + *os_area_msg_seqno += 2;
> +}
> +
> +static struct aesgcm_ctx *snp_init_crypto(unsigned int vmpck_id)
> +{
> + struct aesgcm_ctx *ctx;
> + u8 *key;
> +
> + if (snp_is_vmpck_empty(vmpck_id)) {
> + pr_err("VM communication key VMPCK%u is null\n", vmpck_id);
> + return NULL;
> + }
> +
> + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
> + if (!ctx)
> + return NULL;
> +
> + key = snp_get_vmpck(vmpck_id);
> + if (aesgcm_expandkey(ctx, key, VMPCK_KEY_LEN, AUTHTAG_LEN)) {
> + pr_err("Crypto context initialization failed\n");
> + kfree(ctx);
> + return NULL;
> + }
> +
> + return ctx;
> +}
> +
> +int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev)
> +{
> + struct sev_guest_platform_data *pdata;
> + int ret;
> +
> + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {

Note that this may be going away in favor of a
cpu_feature_enabled(X86_FEATURE_...) check given Kirill's "[PATCH]
x86/coco, x86/sev: Use cpu_feature_enabled() to detect SEV guest
flavor"

> + pr_err("SNP not supported\n");
> + return 0;
> + }
> +
> + if (platform_data) {
> + pr_debug("SNP platform data already initialized.\n");
> + goto create_ctx;
> + }
> +
> + if (!secrets_pa) {
> + pr_err("SNP secrets page not found\n");
> return -ENODEV;
> + }
>
> - if (platform_device_register(&sev_guest_device))
> + pdata = kzalloc(sizeof(struct sev_guest_platform_data), GFP_KERNEL);
> + if (!pdata) {
> + pr_err("Allocation of SNP guest platform data failed\n");
> + return -ENOMEM;
> + }
> +
> + pdata->layout = (__force void *)ioremap_encrypted(secrets_pa, PAGE_SIZE);
> + if (!pdata->layout) {
> + pr_err("Failed to map SNP secrets page.\n");
> + goto e_free_pdata;
> + }
> +
> + ret = -ENOMEM;
> + /* Allocate the shared page used for the request and response message. */
> + pdata->request = alloc_shared_pages(sizeof(struct snp_guest_msg));
> + if (!pdata->request)
> + goto e_unmap;
> +
> + pdata->response = alloc_shared_pages(sizeof(struct snp_guest_msg));
> + if (!pdata->response)
> + goto e_free_request;
> +
> + /* initial the input address for guest request */
> + pdata->input.req_gpa = __pa(pdata->request);
> + pdata->input.resp_gpa = __pa(pdata->response);
> + platform_data = pdata;
> +
> +create_ctx:
> + ret = -EIO;
> + snp_dev->ctx = snp_init_crypto(snp_dev->vmpck_id);
> + if (!snp_dev->ctx) {
> + pr_err("SNP crypto context initialization failed\n");
> + platform_data = NULL;
> + goto e_free_response;
> + }
> +
> + snp_dev->pdata = platform_data;
> +
> + return 0;
> +
> +e_free_response:
> + free_shared_pages(pdata->response, sizeof(struct snp_guest_msg));
> +e_free_request:
> + free_shared_pages(pdata->request, sizeof(struct snp_guest_msg));
> +e_unmap:
> + iounmap(pdata->layout);
> +e_free_pdata:
> + kfree(pdata);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(snp_setup_psp_messaging);
> +
> +static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, struct snp_guest_req *guest_req,
> + struct sev_guest_platform_data *pdata)
> +{
> + struct snp_guest_msg *resp = &snp_dev->secret_response;
> + struct snp_guest_msg *req = &snp_dev->secret_request;
> + struct snp_guest_msg_hdr *req_hdr = &req->hdr;
> + struct snp_guest_msg_hdr *resp_hdr = &resp->hdr;
> + struct aesgcm_ctx *ctx = snp_dev->ctx;
> + u8 iv[GCM_AES_IV_SIZE] = {};
> +
> + pr_debug("response [seqno %lld type %d version %d sz %d]\n",
> + resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version,
> + resp_hdr->msg_sz);
> +
> + /* Copy response from shared memory to encrypted memory. */
> + memcpy(resp, pdata->response, sizeof(*resp));
> +
> + /* Verify that the sequence counter is incremented by 1 */
> + if (unlikely(resp_hdr->msg_seqno != (req_hdr->msg_seqno + 1)))
> + return -EBADMSG;
> +
> + /* Verify response message type and version number. */
> + if (resp_hdr->msg_type != (req_hdr->msg_type + 1) ||
> + resp_hdr->msg_version != req_hdr->msg_version)
> + return -EBADMSG;
> +
> + /*
> + * If the message size is greater than our buffer length then return
> + * an error.
> + */
> + if (unlikely((resp_hdr->msg_sz + ctx->authsize) > guest_req->resp_sz))
> + return -EBADMSG;
> +
> + /* Decrypt the payload */
> + memcpy(iv, &resp_hdr->msg_seqno, sizeof(resp_hdr->msg_seqno));
> + if (!aesgcm_decrypt(ctx, guest_req->resp_buf, resp->payload, resp_hdr->msg_sz,
> + &resp_hdr->algo, AAD_LEN, iv, resp_hdr->authtag))
> + return -EBADMSG;
> +
> + return 0;
> +}
> +
> +static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, struct snp_guest_req *req)
> +{
> + struct snp_guest_msg *msg = &snp_dev->secret_request;
> + struct snp_guest_msg_hdr *hdr = &msg->hdr;
> + struct aesgcm_ctx *ctx = snp_dev->ctx;
> + u8 iv[GCM_AES_IV_SIZE] = {};
> +
> + memset(msg, 0, sizeof(*msg));
> +
> + hdr->algo = SNP_AEAD_AES_256_GCM;
> + hdr->hdr_version = MSG_HDR_VER;
> + hdr->hdr_sz = sizeof(*hdr);
> + hdr->msg_type = req->msg_type;
> + hdr->msg_version = req->msg_version;
> + hdr->msg_seqno = seqno;
> + hdr->msg_vmpck = req->vmpck_id;
> + hdr->msg_sz = req->req_sz;
> +
> + /* Verify the sequence number is non-zero */
> + if (!hdr->msg_seqno)
> + return -ENOSR;
> +
> + pr_debug("request [seqno %lld type %d version %d sz %d]\n",
> + hdr->msg_seqno, hdr->msg_type, hdr->msg_version, hdr->msg_sz);
> +
> + if (WARN_ON((req->req_sz + ctx->authsize) > sizeof(msg->payload)))
> + return -EBADMSG;
> +
> + memcpy(iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
> + aesgcm_encrypt(ctx, msg->payload, req->req_buf, req->req_sz, &hdr->algo,
> + AAD_LEN, iv, hdr->authtag);
> +
> + return 0;
> +}
> +
> +static int __handle_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
> + struct snp_guest_request_ioctl *rio,
> + struct sev_guest_platform_data *pdata)
> +{
> + unsigned long req_start = jiffies;
> + unsigned int override_npages = 0;
> + u64 override_err = 0;
> + int rc;
> +
> +retry_request:
> + /*
> + * Call firmware to process the request. In this function the encrypted
> + * message enters shared memory with the host. So after this call the
> + * sequence number must be incremented or the VMPCK must be deleted to
> + * prevent reuse of the IV.
> + */
> + rc = snp_issue_guest_request(req, &pdata->input, rio);
> + switch (rc) {
> + case -ENOSPC:
> + /*
> + * If the extended guest request fails due to having too
> + * small of a certificate data buffer, retry the same
> + * guest request without the extended data request in
> + * order to increment the sequence number and thus avoid
> + * IV reuse.
> + */
> + override_npages = req->data_npages;
> + req->exit_code = SVM_VMGEXIT_GUEST_REQUEST;
> +
> + /*
> + * Override the error to inform callers the given extended
> + * request buffer size was too small and give the caller the
> + * required buffer size.
> + */
> + override_err = SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN);
> +
> + /*
> + * If this call to the firmware succeeds, the sequence number can
> + * be incremented allowing for continued use of the VMPCK. If
> + * there is an error reflected in the return value, this value
> + * is checked further down and the result will be the deletion
> + * of the VMPCK and the error code being propagated back to the
> + * user as an ioctl() return code.
> + */
> + goto retry_request;
> +
> + /*
> + * The host may return SNP_GUEST_REQ_ERR_BUSY if the request has been
> + * throttled. Retry in the driver to avoid returning and reusing the
> + * message sequence number on a different message.
> + */
> + case -EAGAIN:
> + if (jiffies - req_start > SNP_REQ_MAX_RETRY_DURATION) {
> + rc = -ETIMEDOUT;
> + break;
> + }
> + schedule_timeout_killable(SNP_REQ_RETRY_DELAY);
> + goto retry_request;
> + }
> +
> + /*
> + * Increment the message sequence number. There is no harm in doing
> + * this now because decryption uses the value stored in the response
> + * structure and any failure will wipe the VMPCK, preventing further
> + * use anyway.
> + */
> + snp_inc_msg_seqno(snp_dev);
> +
> + if (override_err) {
> + rio->exitinfo2 = override_err;
> +
> + /*
> + * If an extended guest request was issued and the supplied certificate
> + * buffer was not large enough, a standard guest request was issued to
> + * prevent IV reuse. If the standard request was successful, return -EIO
> + * back to the caller as would have originally been returned.
> + */
> + if (!rc && override_err == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
> + rc = -EIO;
> + }
> +
> + if (override_npages)
> + req->data_npages = override_npages;
> +
> + return rc;
> +}
> +
> +int snp_send_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
> + struct snp_guest_request_ioctl *rio)
> +{
> + struct sev_guest_platform_data *pdata;
> + u64 seqno;
> + int rc;
> +
> + if (!snp_dev || !snp_dev->pdata || !req || !rio)
> return -ENODEV;
>
> - pr_info("SNP guest platform device initialized.\n");
> + pdata = snp_dev->pdata;
> +
> + /* Get message sequence and verify that its a non-zero */
> + seqno = snp_get_msg_seqno(snp_dev);
> + if (!seqno)
> + return -EIO;
> +
> + /* Clear shared memory's response for the host to populate. */
> + memset(pdata->response, 0, sizeof(struct snp_guest_msg));
> +
> + /* Encrypt the userspace provided payload in pdata->secret_request. */
> + rc = enc_payload(snp_dev, seqno, req);
> + if (rc)
> + return rc;
> +
> + /*
> + * Write the fully encrypted request to the shared unencrypted
> + * request page.
> + */
> + memcpy(pdata->request, &snp_dev->secret_request, sizeof(snp_dev->secret_request));
> +
> + rc = __handle_guest_request(snp_dev, req, rio, pdata);
> + if (rc) {
> + if (rc == -EIO &&
> + rio->exitinfo2 == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
> + return rc;
> +
> + pr_alert("Detected error from ASP request. rc: %d, exitinfo2: 0x%llx\n",
> + rc, rio->exitinfo2);
> + snp_disable_vmpck(snp_dev);
> + return rc;
> + }
> +
> + rc = verify_and_dec_payload(snp_dev, req, pdata);
> + if (rc) {
> + pr_alert("Detected unexpected decode failure from ASP. rc: %d\n", rc);
> + snp_disable_vmpck(snp_dev);
> + return rc;
> + }
> +
> return 0;
> }
> -device_initcall(snp_init_platform_device);
> +EXPORT_SYMBOL_GPL(snp_send_guest_request);
> +
> +bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id)
> +{
> + if (WARN_ON(vmpck_id > 3))

This constant 3 should be #define'd, I believe.

> + return false;
> +
> + dev->vmpck_id = vmpck_id;
> +
> + return true;
> +}
> +EXPORT_SYMBOL_GPL(snp_assign_vmpck);
> diff --git a/drivers/virt/coco/sev-guest/Kconfig b/drivers/virt/coco/sev-guest/Kconfig
> index 0b772bd921d8..a6405ab6c2c3 100644
> --- a/drivers/virt/coco/sev-guest/Kconfig
> +++ b/drivers/virt/coco/sev-guest/Kconfig
> @@ -2,7 +2,6 @@ config SEV_GUEST
> tristate "AMD SEV Guest driver"
> default m
> depends on AMD_MEM_ENCRYPT
> - select CRYPTO_LIB_AESGCM
> select TSM_REPORTS
> help
> SEV-SNP firmware provides the guest a mechanism to communicate with
> diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
> index 0f2134deca51..1cdf7ab04d39 100644
> --- a/drivers/virt/coco/sev-guest/sev-guest.c
> +++ b/drivers/virt/coco/sev-guest/sev-guest.c
> @@ -31,130 +31,10 @@
>
> #define DEVICE_NAME "sev-guest"
>
> -#define SNP_REQ_MAX_RETRY_DURATION (60*HZ)
> -#define SNP_REQ_RETRY_DELAY (2*HZ)
> -
> -struct snp_guest_dev {
> - struct device *dev;
> - struct miscdevice misc;
> -
> - /* Mutex to serialize the shared buffer access and command handling. */
> - struct mutex cmd_mutex;
> -
> - void *certs_data;
> - struct aesgcm_ctx *ctx;
> - /* request and response are in unencrypted memory */
> - struct snp_guest_msg *request, *response;
> -
> - /*
> - * Avoid information leakage by double-buffering shared messages
> - * in fields that are in regular encrypted memory.
> - */
> - struct snp_guest_msg secret_request, secret_response;
> -
> - struct snp_secrets_page_layout *layout;
> - struct snp_req_data input;
> - union {
> - struct snp_report_req report;
> - struct snp_derived_key_req derived_key;
> - struct snp_ext_report_req ext_report;
> - } req;
> - unsigned int vmpck_id;
> -};
> -
> static u32 vmpck_id;
> module_param(vmpck_id, uint, 0444);
> MODULE_PARM_DESC(vmpck_id, "The VMPCK ID to use when communicating with the PSP.");
>
> -static inline u8 *snp_get_vmpck(struct snp_guest_dev *snp_dev)
> -{
> - return snp_dev->layout->vmpck0 + snp_dev->vmpck_id * VMPCK_KEY_LEN;
> -}
> -
> -static inline u32 *snp_get_os_area_msg_seqno(struct snp_guest_dev *snp_dev)
> -{
> - return &snp_dev->layout->os_area.msg_seqno_0 + snp_dev->vmpck_id;
> -}
> -
> -static bool snp_is_vmpck_empty(struct snp_guest_dev *snp_dev)
> -{
> - char zero_key[VMPCK_KEY_LEN] = {0};
> - u8 *key = snp_get_vmpck(snp_dev);
> -
> - return !memcmp(key, zero_key, VMPCK_KEY_LEN);
> -}
> -
> -/*
> - * If an error is received from the host or AMD Secure Processor (ASP) there
> - * are two options. Either retry the exact same encrypted request or discontinue
> - * using the VMPCK.
> - *
> - * This is because in the current encryption scheme GHCB v2 uses AES-GCM to
> - * encrypt the requests. The IV for this scheme is the sequence number. GCM
> - * cannot tolerate IV reuse.
> - *
> - * The ASP FW v1.51 only increments the sequence numbers on a successful
> - * guest<->ASP back and forth and only accepts messages at its exact sequence
> - * number.
> - *
> - * So if the sequence number were to be reused the encryption scheme is
> - * vulnerable. If the sequence number were incremented for a fresh IV the ASP
> - * will reject the request.
> - */
> -static void snp_disable_vmpck(struct snp_guest_dev *snp_dev)
> -{
> - u8 *key = snp_get_vmpck(snp_dev);
> -
> - dev_alert(snp_dev->dev, "Disabling vmpck_id %u to prevent IV reuse.\n",
> - snp_dev->vmpck_id);
> - memzero_explicit(key, VMPCK_KEY_LEN);
> -}
> -
> -static inline u64 __snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
> -{
> - u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev);
> - u64 count;
> -
> - lockdep_assert_held(&snp_dev->cmd_mutex);
> -
> - /* Read the current message sequence counter from secrets pages */
> - count = *os_area_msg_seqno;
> -
> - return count + 1;
> -}
> -
> -/* Return a non-zero on success */
> -static u64 snp_get_msg_seqno(struct snp_guest_dev *snp_dev)
> -{
> - u64 count = __snp_get_msg_seqno(snp_dev);
> -
> - /*
> - * The message sequence counter for the SNP guest request is a 64-bit
> - * value but the version 2 of GHCB specification defines a 32-bit storage
> - * for it. If the counter exceeds the 32-bit value then return zero.
> - * The caller should check the return value, but if the caller happens to
> - * not check the value and use it, then the firmware treats zero as an
> - * invalid number and will fail the message request.
> - */
> - if (count >= UINT_MAX) {
> - dev_err(snp_dev->dev, "request message sequence counter overflow\n");
> - return 0;
> - }
> -
> - return count;
> -}
> -
> -static void snp_inc_msg_seqno(struct snp_guest_dev *snp_dev)
> -{
> - u32 *os_area_msg_seqno = snp_get_os_area_msg_seqno(snp_dev);
> -
> - /*
> - * The counter is also incremented by the PSP, so increment it by 2
> - * and save in secrets page.
> - */
> - *os_area_msg_seqno += 2;
> -}
> -
> static inline struct snp_guest_dev *to_snp_dev(struct file *file)
> {
> struct miscdevice *dev = file->private_data;
> @@ -162,241 +42,6 @@ static inline struct snp_guest_dev *to_snp_dev(struct file *file)
> return container_of(dev, struct snp_guest_dev, misc);
> }
>
> -static struct aesgcm_ctx *snp_init_crypto(struct snp_guest_dev *snp_dev)
> -{
> - struct aesgcm_ctx *ctx;
> - u8 *key;
> -
> - if (snp_is_vmpck_empty(snp_dev)) {
> - pr_err("VM communication key VMPCK%u is null\n", vmpck_id);
> - return NULL;
> - }
> -
> - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
> - if (!ctx)
> - return NULL;
> -
> - key = snp_get_vmpck(snp_dev);
> - if (aesgcm_expandkey(ctx, key, VMPCK_KEY_LEN, AUTHTAG_LEN)) {
> - pr_err("Crypto context initialization failed\n");
> - kfree(ctx);
> - return NULL;
> - }
> -
> - return ctx;
> -}
> -
> -static int verify_and_dec_payload(struct snp_guest_dev *snp_dev, struct snp_guest_req *guest_req)
> -{
> - struct snp_guest_msg *resp = &snp_dev->secret_response;
> - struct snp_guest_msg *req = &snp_dev->secret_request;
> - struct snp_guest_msg_hdr *req_hdr = &req->hdr;
> - struct snp_guest_msg_hdr *resp_hdr = &resp->hdr;
> - struct aesgcm_ctx *ctx = snp_dev->ctx;
> - u8 iv[GCM_AES_IV_SIZE] = {};
> -
> - pr_debug("response [seqno %lld type %d version %d sz %d]\n",
> - resp_hdr->msg_seqno, resp_hdr->msg_type, resp_hdr->msg_version,
> - resp_hdr->msg_sz);
> -
> - /* Copy response from shared memory to encrypted memory. */
> - memcpy(resp, snp_dev->response, sizeof(*resp));
> -
> - /* Verify that the sequence counter is incremented by 1 */
> - if (unlikely(resp_hdr->msg_seqno != (req_hdr->msg_seqno + 1)))
> - return -EBADMSG;
> -
> - /* Verify response message type and version number. */
> - if (resp_hdr->msg_type != (req_hdr->msg_type + 1) ||
> - resp_hdr->msg_version != req_hdr->msg_version)
> - return -EBADMSG;
> -
> - /*
> - * If the message size is greater than our buffer length then return
> - * an error.
> - */
> - if (unlikely((resp_hdr->msg_sz + ctx->authsize) > guest_req->resp_sz))
> - return -EBADMSG;
> -
> - /* Decrypt the payload */
> - memcpy(iv, &resp_hdr->msg_seqno, sizeof(resp_hdr->msg_seqno));
> - if (!aesgcm_decrypt(ctx, guest_req->resp_buf, resp->payload, resp_hdr->msg_sz,
> - &resp_hdr->algo, AAD_LEN, iv, resp_hdr->authtag))
> - return -EBADMSG;
> -
> - return 0;
> -}
> -
> -static int enc_payload(struct snp_guest_dev *snp_dev, u64 seqno, struct snp_guest_req *req)
> -{
> - struct snp_guest_msg *msg = &snp_dev->secret_request;
> - struct snp_guest_msg_hdr *hdr = &msg->hdr;
> - struct aesgcm_ctx *ctx = snp_dev->ctx;
> - u8 iv[GCM_AES_IV_SIZE] = {};
> -
> - memset(msg, 0, sizeof(*msg));
> -
> - hdr->algo = SNP_AEAD_AES_256_GCM;
> - hdr->hdr_version = MSG_HDR_VER;
> - hdr->hdr_sz = sizeof(*hdr);
> - hdr->msg_type = req->msg_type;
> - hdr->msg_version = req->msg_version;
> - hdr->msg_seqno = seqno;
> - hdr->msg_vmpck = req->vmpck_id;
> - hdr->msg_sz = req->req_sz;
> -
> - /* Verify the sequence number is non-zero */
> - if (!hdr->msg_seqno)
> - return -ENOSR;
> -
> - pr_debug("request [seqno %lld type %d version %d sz %d]\n",
> - hdr->msg_seqno, hdr->msg_type, hdr->msg_version, hdr->msg_sz);
> -
> - if (WARN_ON((req->req_sz + ctx->authsize) > sizeof(msg->payload)))
> - return -EBADMSG;
> -
> - memcpy(iv, &hdr->msg_seqno, sizeof(hdr->msg_seqno));
> - aesgcm_encrypt(ctx, msg->payload, req->req_buf, req->req_sz, &hdr->algo,
> - AAD_LEN, iv, hdr->authtag);
> -
> - return 0;
> -}
> -
> -static int __handle_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
> - struct snp_guest_request_ioctl *rio)
> -{
> - unsigned long req_start = jiffies;
> - unsigned int override_npages = 0;
> - u64 override_err = 0;
> - int rc;
> -
> -retry_request:
> - /*
> - * Call firmware to process the request. In this function the encrypted
> - * message enters shared memory with the host. So after this call the
> - * sequence number must be incremented or the VMPCK must be deleted to
> - * prevent reuse of the IV.
> - */
> - rc = snp_issue_guest_request(req, &snp_dev->input, rio);
> - switch (rc) {
> - case -ENOSPC:
> - /*
> - * If the extended guest request fails due to having too
> - * small of a certificate data buffer, retry the same
> - * guest request without the extended data request in
> - * order to increment the sequence number and thus avoid
> - * IV reuse.
> - */
> - override_npages = req->data_npages;
> - req->exit_code = SVM_VMGEXIT_GUEST_REQUEST;
> -
> - /*
> - * Override the error to inform callers the given extended
> - * request buffer size was too small and give the caller the
> - * required buffer size.
> - */
> - override_err = SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN);
> -
> - /*
> - * If this call to the firmware succeeds, the sequence number can
> - * be incremented allowing for continued use of the VMPCK. If
> - * there is an error reflected in the return value, this value
> - * is checked further down and the result will be the deletion
> - * of the VMPCK and the error code being propagated back to the
> - * user as an ioctl() return code.
> - */
> - goto retry_request;
> -
> - /*
> - * The host may return SNP_GUEST_VMM_ERR_BUSY if the request has been
> - * throttled. Retry in the driver to avoid returning and reusing the
> - * message sequence number on a different message.
> - */
> - case -EAGAIN:
> - if (jiffies - req_start > SNP_REQ_MAX_RETRY_DURATION) {
> - rc = -ETIMEDOUT;
> - break;
> - }
> - schedule_timeout_killable(SNP_REQ_RETRY_DELAY);
> - goto retry_request;
> - }
> -
> - /*
> - * Increment the message sequence number. There is no harm in doing
> - * this now because decryption uses the value stored in the response
> - * structure and any failure will wipe the VMPCK, preventing further
> - * use anyway.
> - */
> - snp_inc_msg_seqno(snp_dev);
> -
> - if (override_err) {
> - rio->exitinfo2 = override_err;
> -
> - /*
> - * If an extended guest request was issued and the supplied certificate
> - * buffer was not large enough, a standard guest request was issued to
> - * prevent IV reuse. If the standard request was successful, return -EIO
> - * back to the caller as would have originally been returned.
> - */
> - if (!rc && override_err == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
> - rc = -EIO;
> - }
> -
> - if (override_npages)
> - req->data_npages = override_npages;
> -
> - return rc;
> -}
> -
> -static int snp_send_guest_request(struct snp_guest_dev *snp_dev, struct snp_guest_req *req,
> - struct snp_guest_request_ioctl *rio)
> -{
> - u64 seqno;
> - int rc;
> -
> - /* Get message sequence and verify that its a non-zero */
> - seqno = snp_get_msg_seqno(snp_dev);
> - if (!seqno)
> - return -EIO;
> -
> - /* Clear shared memory's response for the host to populate. */
> - memset(snp_dev->response, 0, sizeof(struct snp_guest_msg));
> -
> - /* Encrypt the userspace provided payload in snp_dev->secret_request. */
> - rc = enc_payload(snp_dev, seqno, req);
> - if (rc)
> - return rc;
> -
> - /*
> - * Write the fully encrypted request to the shared unencrypted
> - * request page.
> - */
> - memcpy(snp_dev->request, &snp_dev->secret_request,
> - sizeof(snp_dev->secret_request));
> -
> - rc = __handle_guest_request(snp_dev, req, rio);
> - if (rc) {
> - if (rc == -EIO &&
> - rio->exitinfo2 == SNP_GUEST_VMM_ERR(SNP_GUEST_VMM_ERR_INVALID_LEN))
> - return rc;
> -
> - dev_alert(snp_dev->dev,
> - "Detected error from ASP request. rc: %d, exitinfo2: 0x%llx\n",
> - rc, rio->exitinfo2);
> - snp_disable_vmpck(snp_dev);
> - return rc;
> - }
> -
> - rc = verify_and_dec_payload(snp_dev, req);
> - if (rc) {
> - dev_alert(snp_dev->dev, "Detected unexpected decode failure from ASP. rc: %d\n", rc);
> - snp_disable_vmpck(snp_dev);
> - return rc;
> - }
> -
> - return 0;
> -}
> -
> struct snp_req_resp {
> sockptr_t req_data;
> sockptr_t resp_data;
> @@ -607,7 +252,7 @@ static long snp_guest_ioctl(struct file *file, unsigned int ioctl, unsigned long
> mutex_lock(&snp_dev->cmd_mutex);
>
> /* Check if the VMPCK is not empty */
> - if (snp_is_vmpck_empty(snp_dev)) {
> + if (snp_is_vmpck_empty(snp_dev->vmpck_id)) {
> dev_err_ratelimited(snp_dev->dev, "VMPCK is disabled\n");
> mutex_unlock(&snp_dev->cmd_mutex);
> return -ENOTTY;
> @@ -642,58 +287,11 @@ static long snp_guest_ioctl(struct file *file, unsigned int ioctl, unsigned long
> return ret;
> }
>
> -static void free_shared_pages(void *buf, size_t sz)
> -{
> - unsigned int npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
> - int ret;
> -
> - if (!buf)
> - return;
> -
> - ret = set_memory_encrypted((unsigned long)buf, npages);
> - if (ret) {
> - WARN_ONCE(ret, "failed to restore encryption mask (leak it)\n");
> - return;
> - }
> -
> - __free_pages(virt_to_page(buf), get_order(sz));
> -}
> -
> -static void *alloc_shared_pages(struct device *dev, size_t sz)
> -{
> - unsigned int npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
> - struct page *page;
> - int ret;
> -
> - page = alloc_pages(GFP_KERNEL_ACCOUNT, get_order(sz));
> - if (!page)
> - return NULL;
> -
> - ret = set_memory_decrypted((unsigned long)page_address(page), npages);
> - if (ret) {
> - dev_err(dev, "failed to mark page shared, ret=%d\n", ret);
> - __free_pages(page, get_order(sz));
> - return NULL;
> - }
> -
> - return page_address(page);
> -}
> -
> static const struct file_operations snp_guest_fops = {
> .owner = THIS_MODULE,
> .unlocked_ioctl = snp_guest_ioctl,
> };
>
> -bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id)
> -{
> - if (WARN_ON(vmpck_id > 3))
> - return false;
> -
> - dev->vmpck_id = vmpck_id;
> -
> - return true;
> -}
> -
> struct snp_msg_report_resp_hdr {
> u32 status;
> u32 report_size;
> @@ -727,7 +325,7 @@ static int sev_report_new(struct tsm_report *report, void *data)
> guard(mutex)(&snp_dev->cmd_mutex);
>
> /* Check if the VMPCK is not empty */
> - if (snp_is_vmpck_empty(snp_dev)) {
> + if (snp_is_vmpck_empty(snp_dev->vmpck_id)) {
> dev_err_ratelimited(snp_dev->dev, "VMPCK is disabled\n");
> return -ENOTTY;
> }
> @@ -820,76 +418,43 @@ static void unregister_sev_tsm(void *data)
>
> static int __init sev_guest_probe(struct platform_device *pdev)
> {
> - struct snp_secrets_page_layout *layout;
> - struct sev_guest_platform_data *data;
> struct device *dev = &pdev->dev;
> struct snp_guest_dev *snp_dev;
> struct miscdevice *misc;
> - void __iomem *mapping;
> int ret;
>
> if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
> return -ENODEV;
>
> - if (!dev->platform_data)
> - return -ENODEV;
> -
> - data = (struct sev_guest_platform_data *)dev->platform_data;
> - mapping = ioremap_encrypted(data->secrets_gpa, PAGE_SIZE);
> - if (!mapping)
> - return -ENODEV;
> -
> - layout = (__force void *)mapping;
> -
> - ret = -ENOMEM;
> snp_dev = devm_kzalloc(&pdev->dev, sizeof(struct snp_guest_dev), GFP_KERNEL);
> if (!snp_dev)
> - goto e_unmap;
> + return -ENOMEM;
>
> - ret = -EINVAL;
> - snp_dev->layout = layout;
> if (!snp_assign_vmpck(snp_dev, vmpck_id)) {
> dev_err(dev, "invalid vmpck id %u\n", vmpck_id);
> - goto e_unmap;
> + ret = -EINVAL;
> + goto e_free_snpdev;
> }
>
> - /* Verify that VMPCK is not zero. */
> - if (snp_is_vmpck_empty(snp_dev)) {
> - dev_err(dev, "vmpck id %u is null\n", vmpck_id);
> - goto e_unmap;
> + if (snp_setup_psp_messaging(snp_dev)) {
> + dev_err(dev, "Unable to setup PSP messaging vmpck id %u\n", snp_dev->vmpck_id);
> + ret = -ENODEV;
> + goto e_free_snpdev;
> }
>
> mutex_init(&snp_dev->cmd_mutex);
> platform_set_drvdata(pdev, snp_dev);
> snp_dev->dev = dev;
>
> - /* Allocate the shared page used for the request and response message. */
> - snp_dev->request = alloc_shared_pages(dev, sizeof(struct snp_guest_msg));
> - if (!snp_dev->request)
> - goto e_unmap;
> -
> - snp_dev->response = alloc_shared_pages(dev, sizeof(struct snp_guest_msg));
> - if (!snp_dev->response)
> - goto e_free_request;
> -
> - snp_dev->certs_data = alloc_shared_pages(dev, SEV_FW_BLOB_MAX_SIZE);
> + snp_dev->certs_data = alloc_shared_pages(SEV_FW_BLOB_MAX_SIZE);
> if (!snp_dev->certs_data)
> - goto e_free_response;
> -
> - ret = -EIO;
> - snp_dev->ctx = snp_init_crypto(snp_dev);
> - if (!snp_dev->ctx)
> - goto e_free_cert_data;
> + goto e_free_ctx;
>
> misc = &snp_dev->misc;
> misc->minor = MISC_DYNAMIC_MINOR;
> misc->name = DEVICE_NAME;
> misc->fops = &snp_guest_fops;
>
> - /* initial the input address for guest request */
> - snp_dev->input.req_gpa = __pa(snp_dev->request);
> - snp_dev->input.resp_gpa = __pa(snp_dev->response);
> -
> ret = tsm_register(&sev_tsm_ops, snp_dev, &tsm_report_extra_type);
> if (ret)
> goto e_free_cert_data;
> @@ -900,21 +465,18 @@ static int __init sev_guest_probe(struct platform_device *pdev)
>
> ret = misc_register(misc);
> if (ret)
> - goto e_free_ctx;
> + goto e_free_cert_data;
> +
> + dev_info(dev, "Initialized SEV guest driver (using vmpck_id %u)\n", snp_dev->vmpck_id);
>
> - dev_info(dev, "Initialized SEV guest driver (using vmpck_id %u)\n", vmpck_id);
> return 0;
>
> -e_free_ctx:
> - kfree(snp_dev->ctx);
> e_free_cert_data:
> free_shared_pages(snp_dev->certs_data, SEV_FW_BLOB_MAX_SIZE);
> -e_free_response:
> - free_shared_pages(snp_dev->response, sizeof(struct snp_guest_msg));
> -e_free_request:
> - free_shared_pages(snp_dev->request, sizeof(struct snp_guest_msg));
> -e_unmap:
> - iounmap(mapping);
> +e_free_ctx:
> + kfree(snp_dev->ctx);
> +e_free_snpdev:
> + kfree(snp_dev);
> return ret;
> }
>
> @@ -923,10 +485,9 @@ static int __exit sev_guest_remove(struct platform_device *pdev)
> struct snp_guest_dev *snp_dev = platform_get_drvdata(pdev);
>
> free_shared_pages(snp_dev->certs_data, SEV_FW_BLOB_MAX_SIZE);
> - free_shared_pages(snp_dev->response, sizeof(struct snp_guest_msg));
> - free_shared_pages(snp_dev->request, sizeof(struct snp_guest_msg));
> - kfree(snp_dev->ctx);
> misc_deregister(&snp_dev->misc);
> + kfree(snp_dev->ctx);
> + kfree(snp_dev);
>
> return 0;
> }
> --
> 2.34.1
>


--
-Dionna Glaze, PhD (she/her)

2023-12-05 17:19:13

by Dionna Amalie Glaze

[permalink] [raw]
Subject: Re: [PATCH v6 12/16] x86/sev: Prevent RDTSC/RDTSCP interception for Secure TSC enabled guests

On Tue, Nov 28, 2023 at 5:02 AM Nikunj A Dadhania <[email protected]> wrote:
>
> The hypervisor should not be intercepting RDTSC/RDTSCP when Secure TSC
> is enabled. A #VC exception will be generated if the RDTSC/RDTSCP
> instructions are being intercepted. If this should occur and Secure
> TSC is enabled, terminate guest execution.
>
> Signed-off-by: Nikunj A Dadhania <[email protected]>
> ---
> arch/x86/kernel/sev-shared.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
> index ccb0915e84e1..6d9ef5897421 100644
> --- a/arch/x86/kernel/sev-shared.c
> +++ b/arch/x86/kernel/sev-shared.c
> @@ -991,6 +991,16 @@ static enum es_result vc_handle_rdtsc(struct ghcb *ghcb,
> bool rdtscp = (exit_code == SVM_EXIT_RDTSCP);
> enum es_result ret;
>
> + /*
> + * RDTSC and RDTSCP should not be intercepted when Secure TSC is
> + * enabled. Terminate the SNP guest when the interception is enabled.
> + * This file is included from kernel/sev.c and boot/compressed/sev.c,
> + * use sev_status here as cc_platform_has() is not available when
> + * compiling boot/compressed/sev.c.
> + */
> + if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
> + return ES_VMM_ERROR;

Is this not a cc_platform_has situation? I don't recall how the
conversation shook out for TDX's forcing X86_FEATURE_TSC_RELIABLE
versus having a cc_attr_secure_tsc

> +
> ret = sev_es_ghcb_hv_call(ghcb, ctxt, exit_code, 0, 0);
> if (ret != ES_OK)
> return ret;
> --
> 2.34.1
>


--
-Dionna Glaze, PhD (she/her)

2023-12-06 04:24:29

by Nikunj A. Dadhania

[permalink] [raw]
Subject: Re: [PATCH v6 07/16] x86/sev: Move and reorganize sev guest request api

On 12/5/2023 10:43 PM, Dionna Amalie Glaze wrote:
> On Tue, Nov 28, 2023 at 5:01 AM Nikunj A Dadhania <[email protected]> wrote:
>>
>> +int snp_setup_psp_messaging(struct snp_guest_dev *snp_dev)
>> +{
>> + struct sev_guest_platform_data *pdata;
>> + int ret;
>> +
>> + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
>
> Note that this may be going away in favor of a
> cpu_feature_enabled(X86_FEATURE_...) check given Kirill's "[PATCH]
> x86/coco, x86/sev: Use cpu_feature_enabled() to detect SEV guest
> flavor"

I do not see a conclusion on that yet, so we should wait.

>> +bool snp_assign_vmpck(struct snp_guest_dev *dev, unsigned int vmpck_id)
>> +{
>> + if (WARN_ON(vmpck_id > 3))
>
> This constant 3 should be #define'd, I believe.

Sure, I am working on a few changes related to a per-VMPCK mutex that Tom had suggested offline; that will also need a #define.
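
For illustration only (the macro name below is hypothetical, not something
from this series), the bound could look like:

#define SNP_VMPCK_MAX_ID	3	/* VMPCK0..VMPCK3 in the SNP secrets page */

	if (WARN_ON(vmpck_id > SNP_VMPCK_MAX_ID))
		return false;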

Thanks
Nikunj

2023-12-06 04:37:54

by Nikunj A. Dadhania

[permalink] [raw]
Subject: Re: [PATCH v6 12/16] x86/sev: Prevent RDTSC/RDTSCP interception for Secure TSC enabled guests

On 12/5/2023 10:46 PM, Dionna Amalie Glaze wrote:
> On Tue, Nov 28, 2023 at 5:02 AM Nikunj A Dadhania <[email protected]> wrote:
>>
>> The hypervisor should not be intercepting RDTSC/RDTSCP when Secure TSC
>> is enabled. A #VC exception will be generated if the RDTSC/RDTSCP
>> instructions are being intercepted. If this should occur and Secure
>> TSC is enabled, terminate guest execution.
>>
>> Signed-off-by: Nikunj A Dadhania <[email protected]>
>> ---
>> arch/x86/kernel/sev-shared.c | 10 ++++++++++
>> 1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
>> index ccb0915e84e1..6d9ef5897421 100644
>> --- a/arch/x86/kernel/sev-shared.c
>> +++ b/arch/x86/kernel/sev-shared.c
>> @@ -991,6 +991,16 @@ static enum es_result vc_handle_rdtsc(struct ghcb *ghcb,
>> bool rdtscp = (exit_code == SVM_EXIT_RDTSCP);
>> enum es_result ret;
>>
>> + /*
>> + * RDTSC and RDTSCP should not be intercepted when Secure TSC is
>> + * enabled. Terminate the SNP guest when the interception is enabled.
>> + * This file is included from kernel/sev.c and boot/compressed/sev.c,
>> + * use sev_status here as cc_platform_has() is not available when
>> + * compiling boot/compressed/sev.c.
>> + */
>> + if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
>> + return ES_VMM_ERROR;
>
> Is this not a cc_platform_has situation? I don't recall how the
> conversation shook out for TDX's forcing X86_FEATURE_TSC_RELIABLE
> versus having a cc_attr_secure_tsc

For SNP, SecureTSC is an opt-in feature. AFAIU, for TDX the feature is
turned on by default. So SNP guests need to check if the VMM has enabled
the feature before moving forward with SecureTSC initializations.

The idea was to have a generic name instead of the AMD-specific SecureTSC
(cc_attr_secure_tsc), and I had sought comments from Kirill [1]. After
that discussion I added a synthetic flag for Secure TSC [2].

Regards
Nikunj

1. https://lore.kernel.org/lkml/[email protected]/
2. https://lore.kernel.org/lkml/[email protected]/

2023-12-06 17:47:13

by Peter Gonda

[permalink] [raw]
Subject: Re: [PATCH v6 00/16] Add Secure TSC support for SNP guests

On Tue, Nov 28, 2023 at 6:00 AM Nikunj A Dadhania <[email protected]> wrote:
>
> Secure TSC allows guests to securely use RDTSC/RDTSCP instructions as the
> parameters being used cannot be changed by hypervisor once the guest is
> launched. More details in the AMD64 APM Vol 2, Section "Secure TSC".
>
> During the boot-up of the secondary cpus, SecureTSC enabled guests need to
> query TSC info from AMD Security Processor. This communication channel is
> encrypted between the AMD Security Processor and the guest, the hypervisor
> is just the conduit to deliver the guest messages to the AMD Security
> Processor. Each message is protected with an AEAD (AES-256 GCM). See "SEV
> Secure Nested Paging Firmware ABI Specification" document (currently at
> https://www.amd.com/system/files/TechDocs/56860.pdf) section "TSC Info"
>
> Use a minimal GCM library to encrypt/decrypt SNP Guest messages to
> communicate with the AMD Security Processor which is available at early
> boot.
>
> SEV-guest driver has the implementation for guest and AMD Security
> Processor communication. As the TSC_INFO needs to be initialized during
> early boot before smp cpus are started, move most of the sev-guest driver
> code to kernel/sev.c and provide well defined APIs to the sev-guest driver
> to use the interface to avoid code-duplication.
>
> Patches:
> 01-08: Preparation and movement of sev-guest driver code
> 09-16: SecureTSC enablement patches.
>
> Testing SecureTSC
> -----------------
>
> SecureTSC hypervisor patches based on top of SEV-SNP Guest MEMFD series:
> https://github.com/nikunjad/linux/tree/snp-host-latest-securetsc_v5
>
> QEMU changes:
> https://github.com/nikunjad/qemu/tree/snp_securetsc_v5
>
> QEMU commandline SEV-SNP-UPM with SecureTSC:
>
> qemu-system-x86_64 -cpu EPYC-Milan-v2,+secure-tsc,+invtsc -smp 4 \
> -object memory-backend-memfd-private,id=ram1,size=1G,share=true \
> -object sev-snp-guest,id=sev0,cbitpos=51,reduced-phys-bits=1,secure-tsc=on \
> -machine q35,confidential-guest-support=sev0,memory-backend=ram1,kvm-type=snp \
> ...

Thanks Nikunj!

I was able to modify my SNP host kernel to support SecureTSC based off
of your `snp-host-latest-securetsc_v5` and use that to test this
series. Seemed to work as intended in the happy path but I didn't
spend much time trying any corner cases. I also checked that the series
continues to work without SecureTSC enabled for the VM.

Tested-by: Peter Gonda <[email protected]>

2023-12-06 18:46:19

by Dionna Amalie Glaze

[permalink] [raw]
Subject: Re: [PATCH v6 12/16] x86/sev: Prevent RDTSC/RDTSCP interception for Secure TSC enabled guests

> >> + if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
> >> + return ES_VMM_ERROR;
> >
> > Is this not a cc_platform_has situation? I don't recall how the
> > conversation shook out for TDX's forcing X86_FEATURE_TSC_RELIABLE
> > versus having a cc_attr_secure_tsc
>
> For SNP, SecureTSC is an opt-in feature. AFAIU, for TDX the feature is
> turned on by default. So SNP guests need to check if the VMM has enabled
> the feature before moving forward with SecureTSC initializations.
>
> The idea was to have some generic name instead of AMD specific SecureTSC
> (cc_attr_secure_tsc), and I had sought comments from Kirill [1]. After
> that discussion I have added a synthetic flag for Secure TSC[2].
>

So with regard to [2], should this sev_status flag check be
cpu_has_feature(X86_FEATURE_SNP_SECURE_TSC)? I'm not sure whether that's
available in early boot where this code is used, so if it isn't, that's
probably worth a comment.

--
-Dionna Glaze, PhD (she/her)

2023-12-06 22:22:02

by Dionna Amalie Glaze

[permalink] [raw]
Subject: Re: [PATCH v6 06/16] x86/sev: Cache the secrets page address

>
> +static void __init set_secrets_pa(const struct cc_blob_sev_info *cc_info)
> +{
> + if (cc_info && cc_info->secrets_phys && cc_info->secrets_len == PAGE_SIZE)
> + secrets_pa = cc_info->secrets_phys;
> +}

I know failure leads to an -ENODEV later on in init_platform, but a
missing secrets page as a symptom seems like a good thing to log,
right?

> - if (!gpa)
> + if (!secrets_pa)
> return -ENODEV;
>


--
-Dionna Glaze, PhD (she/her)

2023-12-07 06:06:52

by Nikunj A. Dadhania

[permalink] [raw]
Subject: Re: [PATCH v6 06/16] x86/sev: Cache the secrets page address

On 12/7/2023 3:51 AM, Dionna Amalie Glaze wrote:
>>
>> +static void __init set_secrets_pa(const struct cc_blob_sev_info *cc_info)
>> +{
>> + if (cc_info && cc_info->secrets_phys && cc_info->secrets_len == PAGE_SIZE)
>> + secrets_pa = cc_info->secrets_phys;
>> +}
>
> I know failure leads to an -ENODEV later on init_platform, but a
> missing secrets page as a symptom seems like a good thing to log,
> right?

Added in the next patch.

+ if (!secrets_pa) {
+ pr_err("SNP secrets page not found\n");
return -ENODEV;
+ }

>
>> - if (!gpa)
>> + if (!secrets_pa)
>> return -ENODEV;
>>
>
>

Regards
Nikunj

2023-12-07 06:12:35

by Nikunj A. Dadhania

[permalink] [raw]
Subject: Re: [PATCH v6 12/16] x86/sev: Prevent RDTSC/RDTSCP interception for Secure TSC enabled guests

On 12/7/2023 12:15 AM, Dionna Amalie Glaze wrote:
>>>> + if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
>>>> + return ES_VMM_ERROR;
>>>
>>> Is this not a cc_platform_has situation? I don't recall how the
>>> conversation shook out for TDX's forcing X86_FEATURE_TSC_RELIABLE
>>> versus having a cc_attr_secure_tsc
>>
>> For SNP, SecureTSC is an opt-in feature. AFAIU, for TDX the feature is
>> turned on by default. So SNP guests need to check if the VMM has enabled
>> the feature before moving forward with SecureTSC initializations.
>>
>> The idea was to have some generic name instead of AMD specific SecureTSC
>> (cc_attr_secure_tsc), and I had sought comments from Kirill [1]. After
>> that discussion I have added a synthetic flag for Secure TSC[2].
>>
>
> So with regards to [2], this sev_status flag check should be
> cpu_has_feature(X86_FEATURE_SNP_SECURE_TSC)? I'm not sure if that's
> available in early boot where this code is used, so if it isn't,
> probably that's worth a comment.

Right, I will update the comment.

Regards
Nikunj
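
A sketch of how the updated comment might read; the wording is an assumption
and the check itself is unchanged from the patch above:

	/*
	 * RDTSC and RDTSCP should not be intercepted when Secure TSC is
	 * enabled. Terminate the SNP guest if interception is detected.
	 * sev_status is checked directly because this file is also built
	 * into boot/compressed/sev.c, where cpu_feature_enabled() and
	 * cc_platform_has() are not available.
	 */
	if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
		return ES_VMM_ERROR;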