Nitro Enclaves (NE) is a new Amazon Elastic Compute Cloud (EC2) capability
that allows customers to carve out isolated compute environments within EC2
instances [1].
For example, an application that processes sensitive data and runs in a VM
can be separated from other applications running in the same VM. This
application then runs in a VM separate from the primary VM, namely an enclave.
An enclave runs alongside the VM that spawned it. This setup matches the needs
of low-latency applications. The resources allocated for the enclave, such as
memory and CPU, are carved out of the primary VM. Each enclave is mapped to a
process running in the primary VM that communicates with the NE driver via an
ioctl interface.
In this sense, there are two components:
1. An enclave abstraction process - a user space process running in the primary
VM guest that uses the provided ioctl interface of the NE driver to spawn an
enclave VM (that's 2 below).
How does this all get to an enclave VM running on the host?
There is an NE emulated PCI device exposed to the primary VM. The driver for this
new PCI device is included in the current patch series.
The ioctl logic is mapped to PCI device commands, e.g. the NE_START_ENCLAVE
ioctl maps to an enclave start PCI command and KVM_SET_USER_MEMORY_REGION maps
to an add memory PCI command. The PCI device commands are then translated into
actions taken on the hypervisor side; that's the Nitro hypervisor running on the
host where the primary VM is running. The Nitro hypervisor is based on core KVM
technology.
2. The enclave itself - a VM running on the same host as the primary VM that
spawned it. Memory and CPUs are carved out of the primary VM and are dedicated
for the enclave VM. An enclave does not have persistent storage attached.
An enclave communicates with the primary VM via a local communication channel,
using virtio-vsock [2]. The primary VM has a virtio-pci vsock emulated device,
while the enclave VM has a virtio-mmio vsock emulated device. The vsock device
uses eventfd for signaling. The enclave VM sees the usual interfaces - local
APIC and IOAPIC - to get interrupts from the virtio-vsock device. The virtio-mmio
device is placed in memory below the typical 4 GiB.
The application that runs in the enclave needs to be packaged in an enclave
image together with the OS (e.g. kernel, ramdisk, init) that will run in the
enclave VM. The enclave VM has its own kernel and follows the standard Linux
boot protocol.
The kernel bzImage, the kernel command line and the ramdisk(s) are part of the
Enclave Image Format (EIF), together with an EIF header including metadata such
as magic number, EIF version, image size and CRC. We've also considered the FIT
image format [3] as an option for the enclave image.
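As a rough sketch, an EIF header carrying this metadata could look like the
structure below; the field order and sizes here are assumptions for
illustration, only the listed metadata is given above.

    /* Hypothetical EIF header layout - field order / sizes are assumptions. */
    struct eif_header {
            u32 magic;       /* magic number identifying an EIF image */
            u16 version;     /* EIF format version */
            u64 image_size;  /* size of the entire enclave image, in bytes */
            u32 crc;         /* CRC computed over the image contents */
    } __attribute__ ((__packed__));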
Hash values are computed for the entire enclave image (EIF), the kernel and
ramdisk(s). These are used, for example, to check that the enclave image
loaded in the enclave VM is the one that was intended to be run.
These crypto measurements are included in a signed attestation document
generated by the Nitro Hypervisor and further used to prove the identity of the
enclave; KMS is an example of a service that NE is integrated with and that
checks the attestation document.
The enclave image (EIF) is loaded in the enclave memory at offset 8 MiB. The
init process in the enclave connects to the vsock CID of the primary VM and a
predefined port - 9000 - to send a heartbeat value - 0xb7. This mechanism is
used in the primary VM to check that the enclave has booted.
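A minimal user space sketch of the enclave-side heartbeat is included below;
the primary VM CID value (3) is an assumption here, while the port (9000) and
the heartbeat value (0xb7) are the predefined ones mentioned above.

    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <linux/vm_sockets.h>

    /* Sketch: send the boot heartbeat from the enclave init process. */
    static int send_heartbeat(void)
    {
            struct sockaddr_vm addr = {
                    .svm_family = AF_VSOCK,
                    .svm_port = 9000,  /* predefined heartbeat port */
                    .svm_cid = 3,      /* assumed CID of the primary VM */
            };
            uint8_t heartbeat = 0xb7;  /* predefined heartbeat value */
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

            if (fd < 0)
                    return -1;

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
                write(fd, &heartbeat, sizeof(heartbeat)) < 0) {
                    close(fd);
                    return -1;
            }

            close(fd);

            return 0;
    }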
If the enclave VM crashes or gracefully exits, an interrupt event is received
by the NE driver. This event is then forwarded to the user space enclave
process running in the primary VM via a poll notification mechanism, so that
the user space enclave process can exit.
The following patch series covers the NE driver for enclave lifetime management.
It provides an ioctl interface to the user space and includes the NE PCI device
driver that is the means of communication with the hypervisor running on the
host where the primary VM and the enclave are launched.
The proposed solution follows the KVM model and uses KVM ioctls to create and
set resources for enclaves. Additional NE ioctl commands, besides the ones
provided by KVM, are used to start an enclave and get the memory offset for
in-memory enclave image loading.
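As a sketch of the basic user space flow (see the ioctl sample in the series
for the complete version; error handling and the memory region / enclave start
arguments are elided here):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <linux/nitro_enclaves.h>

    int main(void)
    {
            /* The NE ioctl interface is exposed via a misc device. */
            int ne_fd = open("/dev/nitro_enclaves", O_RDWR | O_CLOEXEC);

            /* Allocate an enclave slot and get back the enclave fd. */
            unsigned long vm_type = 0;
            int enclave_fd = ioctl(ne_fd, KVM_CREATE_VM, &vm_type);

            /* Add a vCPU; the driver picks a CPU from the NE CPU pool. */
            __u32 vcpu_id = 0;
            int vcpu_fd = ioctl(enclave_fd, KVM_CREATE_VCPU, &vcpu_id);

            /*
             * Then KVM_SET_USER_MEMORY_REGION for each enclave memory
             * region and NE_START_ENCLAVE to start the enclave.
             */

            return 0;
    }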
Thank you.
Andra
[1] https://aws.amazon.com/ec2/nitro/nitro-enclaves/
[2] http://man7.org/linux/man-pages/man7/vsock.7.html
[3] https://github.com/u-boot/u-boot/tree/master/doc/uImage.FIT
---
Patch Series Changelog
The patch series is built on top of v5.7-rc7.
v2 -> v3
* Rebase on top of v5.7-rc7.
* Add changelog to each patch in the series.
* Remove "ratelimited" from the logs that are not in the ioctl call paths.
* Update static calls sanity checks.
* Remove file ops that do nothing for now.
* Remove GPL additional wording as SPDX-License-Identifier is already in place.
* v2: https://lore.kernel.org/lkml/[email protected]/
v1 -> v2
* Rebase on top of v5.7-rc6.
* Adapt codebase based on feedback from v1.
* Update ioctl number definition - major and minor.
* Add sample / documentation for the ioctl interface basic flow usage.
* Update cover letter to include more context on the NE overall.
* Add fix for the enclave / vcpu fd creation error cleanup path.
* Add fix reported by kbuild test robot <[email protected]>.
* v1: https://lore.kernel.org/lkml/[email protected]/
---
Andra Paraschiv (18):
nitro_enclaves: Add ioctl interface definition
nitro_enclaves: Define the PCI device interface
nitro_enclaves: Define enclave info for internal bookkeeping
nitro_enclaves: Init PCI device driver
nitro_enclaves: Handle PCI device command requests
nitro_enclaves: Handle out-of-band PCI device events
nitro_enclaves: Init misc device providing the ioctl interface
nitro_enclaves: Add logic for enclave vm creation
nitro_enclaves: Add logic for enclave vcpu creation
nitro_enclaves: Add logic for enclave image load metadata
nitro_enclaves: Add logic for enclave memory region set
nitro_enclaves: Add logic for enclave start
nitro_enclaves: Add logic for enclave termination
nitro_enclaves: Add Kconfig for the Nitro Enclaves driver
nitro_enclaves: Add Makefile for the Nitro Enclaves driver
nitro_enclaves: Add sample for ioctl interface usage
nitro_enclaves: Add overview documentation
MAINTAINERS: Add entry for the Nitro Enclaves driver
Documentation/nitro_enclaves/ne_overview.txt | 86 ++
.../userspace-api/ioctl/ioctl-number.rst | 5 +-
MAINTAINERS | 13 +
drivers/virt/Kconfig | 2 +
drivers/virt/Makefile | 2 +
drivers/virt/nitro_enclaves/Kconfig | 16 +
drivers/virt/nitro_enclaves/Makefile | 11 +
drivers/virt/nitro_enclaves/ne_misc_dev.c | 1052 +++++++++++++++++
drivers/virt/nitro_enclaves/ne_misc_dev.h | 109 ++
drivers/virt/nitro_enclaves/ne_pci_dev.c | 606 ++++++++++
drivers/virt/nitro_enclaves/ne_pci_dev.h | 254 ++++
include/linux/nitro_enclaves.h | 11 +
include/uapi/linux/nitro_enclaves.h | 65 +
samples/nitro_enclaves/.gitignore | 2 +
samples/nitro_enclaves/Makefile | 16 +
samples/nitro_enclaves/ne_ioctl_sample.c | 490 ++++++++
16 files changed, 2739 insertions(+), 1 deletion(-)
create mode 100644 Documentation/nitro_enclaves/ne_overview.txt
create mode 100644 drivers/virt/nitro_enclaves/Kconfig
create mode 100644 drivers/virt/nitro_enclaves/Makefile
create mode 100644 drivers/virt/nitro_enclaves/ne_misc_dev.c
create mode 100644 drivers/virt/nitro_enclaves/ne_misc_dev.h
create mode 100644 drivers/virt/nitro_enclaves/ne_pci_dev.c
create mode 100644 drivers/virt/nitro_enclaves/ne_pci_dev.h
create mode 100644 include/linux/nitro_enclaves.h
create mode 100644 include/uapi/linux/nitro_enclaves.h
create mode 100644 samples/nitro_enclaves/.gitignore
create mode 100644 samples/nitro_enclaves/Makefile
create mode 100644 samples/nitro_enclaves/ne_ioctl_sample.c
--
2.20.1 (Apple Git-117)
The Nitro Enclaves (NE) driver communicates with a new PCI device that
is exposed to a virtual machine (VM) and handles commands for managing
enclave lifetime e.g. creation, termination, setting memory regions. The
communication with the PCI device is handled using an MMIO space and
MSI-X interrupts.
This device communicates with the hypervisor on the host, where the VM
that spawned the enclave itself runs, e.g. to launch a VM that is used
for the enclave.
Define the MMIO space of the PCI device and the commands provided by
this device. Add an internal data structure used as private data for the
PCI device driver and the functions for the PCI device init / uninit and
command request handling.
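As an example of how these registers are used, below is a sketch of the
device enable handshake via the NE_ENABLE register; the full version,
including the version handshake, is part of the PCI device setup patch later
in the series.

    /* Sketch: enable the device by writing and reading back NE_ENABLE. */
    static int ne_enable_sketch(struct ne_pci_dev *ne_pci_dev)
    {
            iowrite8(NE_ENABLE_ON, ne_pci_dev->iomem_base + NE_ENABLE);

            if (ioread8(ne_pci_dev->iomem_base + NE_ENABLE) != NE_ENABLE_ON)
                    return -EIO;  /* device did not ack the enable request */

            return 0;
    }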
Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
Signed-off-by: Alexandru Ciobotaru <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the GPL additional wording as SPDX-License-Identifier is already in
place.
v1 -> v2
* Update path naming to drivers/virt/nitro_enclaves.
* Update NE_ENABLE_OFF / NE_ENABLE_ON defines.
---
drivers/virt/nitro_enclaves/ne_pci_dev.h | 254 +++++++++++++++++++++++
1 file changed, 254 insertions(+)
create mode 100644 drivers/virt/nitro_enclaves/ne_pci_dev.h
diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.h b/drivers/virt/nitro_enclaves/ne_pci_dev.h
new file mode 100644
index 000000000000..eafef6524d97
--- /dev/null
+++ b/drivers/virt/nitro_enclaves/ne_pci_dev.h
@@ -0,0 +1,254 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ */
+
+#ifndef _NE_PCI_DEV_H_
+#define _NE_PCI_DEV_H_
+
+#include <linux/atomic.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/wait.h>
+
+/* Nitro Enclaves (NE) PCI device identifier */
+
+#define PCI_DEVICE_ID_NE (0xe4c1)
+#define PCI_BAR_NE (0x03)
+
+/* Device registers */
+
+/**
+ * (1 byte) Register to notify the device that the driver is using it
+ * (Read/Write).
+ */
+#define NE_ENABLE (0x0000)
+#define NE_ENABLE_OFF (0x00)
+#define NE_ENABLE_ON (0x01)
+
+/* (2 bytes) Register to select the device run-time version (Read/Write). */
+#define NE_VERSION (0x0002)
+#define NE_VERSION_MAX (0x0001)
+
+/**
+ * (4 bytes) Register to notify the device what command was requested
+ * (Write-Only).
+ */
+#define NE_COMMAND (0x0004)
+
+/**
+ * (4 bytes) Register to notify the driver that a reply or a device event
+ * is available (Read-Only):
+ * - Lower half - command reply counter
+ * - Higher half - out-of-band device event counter
+ */
+#define NE_EVTCNT (0x000c)
+#define NE_EVTCNT_REPLY_SHIFT (0)
+#define NE_EVTCNT_REPLY_MASK (0x0000ffff)
+#define NE_EVTCNT_REPLY(cnt) (((cnt) & NE_EVTCNT_REPLY_MASK) >> \
+ NE_EVTCNT_REPLY_SHIFT)
+#define NE_EVTCNT_EVENT_SHIFT (16)
+#define NE_EVTCNT_EVENT_MASK (0xffff0000)
+#define NE_EVTCNT_EVENT(cnt) (((cnt) & NE_EVTCNT_EVENT_MASK) >> \
+ NE_EVTCNT_EVENT_SHIFT)
+
+/* (240 bytes) Buffer for sending the command request payload (Read/Write). */
+#define NE_SEND_DATA (0x0010)
+
+/* (240 bytes) Buffer for receiving the command reply payload (Read-Only). */
+#define NE_RECV_DATA (0x0100)
+
+/* Device MMIO buffer sizes */
+
+/* 240 bytes for send / recv buffer. */
+#define NE_SEND_DATA_SIZE (240)
+#define NE_RECV_DATA_SIZE (240)
+
+/* MSI-X interrupt vectors */
+
+/* MSI-X vector used for command reply notification. */
+#define NE_VEC_REPLY (0)
+
+/* MSI-X vector used for out-of-band events e.g. enclave crash. */
+#define NE_VEC_EVENT (1)
+
+/* Device command types. */
+enum ne_pci_dev_cmd_type {
+ INVALID_CMD = 0,
+ ENCLAVE_START = 1,
+ ENCLAVE_GET_SLOT = 2,
+ ENCLAVE_STOP = 3,
+ SLOT_ALLOC = 4,
+ SLOT_FREE = 5,
+ SLOT_ADD_MEM = 6,
+ SLOT_ADD_VCPU = 7,
+ SLOT_COUNT = 8,
+ NEXT_SLOT = 9,
+ SLOT_INFO = 10,
+ SLOT_ADD_BULK_VCPUS = 11,
+ MAX_CMD,
+};
+
+/* Device commands - payload structure for requests and replies. */
+
+struct enclave_start_req {
+ /* Slot unique id mapped to the enclave to start. */
+ u64 slot_uid;
+
+ /**
+ * Context ID (CID) for the enclave vsock device.
+ * If 0, CID is autogenerated.
+ */
+ u64 enclave_cid;
+
+ /* Flags for the enclave to start with (e.g. debug mode). */
+ u64 flags;
+} __attribute__ ((__packed__));
+
+struct enclave_get_slot_req {
+ /* Context ID (CID) for the enclave vsock device. */
+ u64 enclave_cid;
+} __attribute__ ((__packed__));
+
+struct enclave_stop_req {
+ /* Slot unique id mapped to the enclave to stop. */
+ u64 slot_uid;
+} __attribute__ ((__packed__));
+
+struct slot_alloc_req {
+ /* In order to avoid weird sizeof edge cases. */
+ u8 unused;
+} __attribute__ ((__packed__));
+
+struct slot_free_req {
+ /* Slot unique id mapped to the slot to free. */
+ u64 slot_uid;
+} __attribute__ ((__packed__));
+
+struct slot_add_mem_req {
+ /* Slot unique id mapped to the slot to add the memory region to. */
+ u64 slot_uid;
+
+ /* Physical address of the memory region to add to the slot. */
+ u64 paddr;
+
+ /* Memory size, in bytes, of the memory region to add to the slot. */
+ u64 size;
+} __attribute__ ((__packed__));
+
+struct slot_add_vcpu_req {
+ /* Slot unique id mapped to the slot to add the vCPU to. */
+ u64 slot_uid;
+
+ /* vCPU ID of the CPU to add to the enclave. */
+ u32 vcpu_id;
+} __attribute__ ((__packed__));
+
+struct slot_count_req {
+ /* In order to avoid weird sizeof edge cases. */
+ u8 unused;
+} __attribute__ ((__packed__));
+
+struct next_slot_req {
+ /* Slot unique id of the next slot in the iteration. */
+ u64 slot_uid;
+} __attribute__ ((__packed__));
+
+struct slot_info_req {
+ /* Slot unique id mapped to the slot to get information about. */
+ u64 slot_uid;
+} __attribute__ ((__packed__));
+
+struct slot_add_bulk_vcpus_req {
+ /* Slot unique id mapped to the slot to add vCPUs to. */
+ u64 slot_uid;
+
+ /* Number of vCPUs to add to the slot. */
+ u64 nr_vcpus;
+} __attribute__ ((__packed__));
+
+struct ne_pci_dev_cmd_reply {
+ s32 rc;
+
+ /* Valid for all commands except SLOT_COUNT. */
+ u64 slot_uid;
+
+ /* Valid for ENCLAVE_START command. */
+ u64 enclave_cid;
+
+ /* Valid for SLOT_COUNT command. */
+ u64 slot_count;
+
+ /* Valid for SLOT_ALLOC and SLOT_INFO commands. */
+ u64 mem_regions;
+
+ /* Valid for SLOT_INFO command. */
+ u64 mem_size;
+
+ /* Valid for SLOT_INFO command. */
+ u64 nr_vcpus;
+
+ /* Valid for SLOT_INFO command. */
+ u64 flags;
+
+ /* Valid for SLOT_INFO command. */
+ u16 state;
+} __attribute__ ((__packed__));
+
+/* Nitro Enclaves (NE) PCI device. */
+struct ne_pci_dev {
+ /* Variable set if a reply has been sent by the PCI device. */
+ atomic_t cmd_reply_avail;
+
+ /* Wait queue for handling command reply from the PCI device. */
+ wait_queue_head_t cmd_reply_wait_q;
+
+ /* List of the enclaves managed by the PCI device. */
+ struct list_head enclaves_list;
+
+ /* Mutex for accessing the list of enclaves. */
+ struct mutex enclaves_list_mutex;
+
+ /**
+ * Work queue for handling out-of-band events triggered by the Nitro
+ * Hypervisor which require enclave state scanning and propagation to
+ * the enclave process.
+ */
+ struct workqueue_struct *event_wq;
+
+ /* MMIO region of the PCI device. */
+ void __iomem *iomem_base;
+
+ /* Work item for every received out-of-band event. */
+ struct work_struct notify_work;
+
+ /* Mutex for accessing the PCI dev MMIO space. */
+ struct mutex pci_dev_mutex;
+};
+
+/**
+ * ne_do_request - Submit command request to the PCI device based on the command
+ * type and retrieve the associated reply.
+ *
+ * This function uses the ne_pci_dev mutex to handle one command at a time.
+ *
+ * @pdev: PCI device to send the command to and receive the reply from.
+ * @cmd_type: command type of the request sent to the PCI device.
+ * @cmd_request: command request payload.
+ * @cmd_request_size: size of the command request payload.
+ * @cmd_reply: command reply payload.
+ * @cmd_reply_size: size of the command reply payload.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+int ne_do_request(struct pci_dev *pdev, enum ne_pci_dev_cmd_type cmd_type,
+ void *cmd_request, size_t cmd_request_size,
+ struct ne_pci_dev_cmd_reply *cmd_reply,
+ size_t cmd_reply_size);
+
+/* Nitro Enclaves (NE) PCI device driver */
+extern struct pci_driver ne_pci_driver;
+
+#endif /* _NE_PCI_DEV_H_ */
--
2.20.1 (Apple Git-117)
The Nitro Enclaves driver keeps internal info for each enclave.
This is needed to be able to manage enclave resources state and enclave
notifications, and to have a reference to the PCI device that handles
command requests for enclave lifetime management.
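For example, a hypothetical helper (not part of this patch) that finds the
enclave a device event refers to would walk this bookkeeping under the
enclaves list mutex:

    /* Sketch: look up an enclave by slot unique id; hypothetical helper. */
    static struct ne_enclave *ne_find_enclave(struct ne_pci_dev *ne_pci_dev,
                                              u64 slot_uid)
    {
            struct ne_enclave *ne_enclave = NULL;
            struct ne_enclave *found = NULL;

            mutex_lock(&ne_pci_dev->enclaves_list_mutex);

            list_for_each_entry(ne_enclave, &ne_pci_dev->enclaves_list,
                                enclave_list_entry) {
                    if (ne_enclave->slot_uid == slot_uid) {
                            found = ne_enclave;

                            break;
                    }
            }

            mutex_unlock(&ne_pci_dev->enclaves_list_mutex);

            return found;
    }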
Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the GPL additional wording as SPDX-License-Identifier is already in
place.
v1 -> v2
* Add enclave memory regions and vcpus count for enclave bookkeeping.
* Update ne_state comments to reflect NE_START_ENCLAVE ioctl naming update.
---
drivers/virt/nitro_enclaves/ne_misc_dev.h | 109 ++++++++++++++++++++++
1 file changed, 109 insertions(+)
create mode 100644 drivers/virt/nitro_enclaves/ne_misc_dev.h
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.h b/drivers/virt/nitro_enclaves/ne_misc_dev.h
new file mode 100644
index 000000000000..6f1db85fc741
--- /dev/null
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ */
+
+#ifndef _NE_MISC_DEV_H_
+#define _NE_MISC_DEV_H_
+
+#include <linux/cpumask.h>
+#include <linux/list.h>
+#include <linux/miscdevice.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/wait.h>
+
+/* Entry in vCPU IDs list. */
+struct ne_vcpu_id {
+ /* CPU id associated with a given slot, apic id on x86. */
+ u32 vcpu_id;
+
+ struct list_head vcpu_id_list_entry;
+};
+
+/* Entry in memory regions list. */
+struct ne_mem_region {
+ struct list_head mem_region_list_entry;
+
+ /* Number of pages that make up the memory region. */
+ unsigned long nr_pages;
+
+ /* Pages that make up the user space memory region. */
+ struct page **pages;
+};
+
+/* Per-enclave data used for enclave lifetime management. */
+struct ne_enclave {
+ /**
+	 * CPU pool with the siblings of the CPUs that are already allocated
+	 * to an enclave. This is used when a CPU pool is set, to know the
+	 * CPU siblings for the hyperthreading (HT) setup.
+ */
+ cpumask_var_t cpu_siblings;
+
+ struct list_head enclave_list_entry;
+
+ /* Mutex for accessing this internal state. */
+ struct mutex enclave_info_mutex;
+
+ /**
+ * Wait queue used for out-of-band event notifications
+ * triggered from the PCI device event handler to the enclave
+ * process via the poll function.
+ */
+ wait_queue_head_t eventq;
+
+ /* Variable used to determine if the out-of-band event was triggered. */
+ bool has_event;
+
+ /**
+ * The maximum number of memory regions that can be handled by the
+ * lower levels.
+ */
+ u64 max_mem_regions;
+
+ /* Enclave memory regions list. */
+ struct list_head mem_regions_list;
+
+ /* Enclave process abstraction mm data struct. */
+ struct mm_struct *mm;
+
+ /* Number of memory regions associated with the enclave. */
+ u64 nr_mem_regions;
+
+ /* Number of vcpus associated with the enclave. */
+ u64 nr_vcpus;
+
+ /* PCI device used for enclave lifetime management. */
+ struct pci_dev *pdev;
+
+ /* Slot unique id mapped to the enclave. */
+ u64 slot_uid;
+
+ /* Enclave state, updated during enclave lifetime. */
+ u16 state;
+
+ /* Enclave vCPUs list. */
+ struct list_head vcpu_ids_list;
+};
+
+/* States available for an enclave. */
+enum ne_state {
+ /* NE_START_ENCLAVE ioctl was never issued for the enclave. */
+ NE_STATE_INIT = 0,
+
+ /**
+ * NE_START_ENCLAVE ioctl was issued and the enclave is running
+ * as expected.
+ */
+ NE_STATE_RUNNING = 2,
+
+ /* Enclave exited without userspace interaction. */
+ NE_STATE_STOPPED = U16_MAX,
+};
+
+/* Nitro Enclaves (NE) misc device */
+extern struct miscdevice ne_miscdevice;
+
+#endif /* _NE_MISC_DEV_H_ */
--
2.20.1 (Apple Git-117)
The Nitro Enclaves PCI device is used by the kernel driver as a means of
communication with the hypervisor on the host where the primary VM and
the enclaves run. It handles requests with regard to enclave lifetime.
Set up the PCI device driver and add support for MSI-X interrupts.
Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
Signed-off-by: Alexandru Ciobotaru <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the GPL additional wording as SPDX-License-Identifier is already in
place.
* Remove the WARN_ON calls.
* Remove linux/bug include that is not needed.
* Update static calls sanity checks.
* Remove "ratelimited" from the logs that are not in the ioctl call paths.
* Update kzfree() calls to kfree().
v1 -> v2
* Add log pattern for NE.
* Update PCI device setup functions to receive PCI device data structure and
then get private data from it inside the functions logic.
* Remove the BUG_ON calls.
* Add teardown function for MSI-X setup.
* Update goto labels to match their purpose.
* Implement TODO for NE PCI device disable state check.
* Update function name for NE PCI device probe / remove.
---
drivers/virt/nitro_enclaves/ne_pci_dev.c | 252 +++++++++++++++++++++++
1 file changed, 252 insertions(+)
create mode 100644 drivers/virt/nitro_enclaves/ne_pci_dev.c
diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.c b/drivers/virt/nitro_enclaves/ne_pci_dev.c
new file mode 100644
index 000000000000..0b66166787b6
--- /dev/null
+++ b/drivers/virt/nitro_enclaves/ne_pci_dev.c
@@ -0,0 +1,252 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ */
+
+/* Nitro Enclaves (NE) PCI device driver. */
+
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/module.h>
+#include <linux/nitro_enclaves.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#include "ne_misc_dev.h"
+#include "ne_pci_dev.h"
+
+#define DEFAULT_TIMEOUT_MSECS (120000) /* 120 sec */
+
+#define NE "nitro_enclaves: "
+
+static const struct pci_device_id ne_pci_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, PCI_DEVICE_ID_NE) },
+ { 0, }
+};
+
+MODULE_DEVICE_TABLE(pci, ne_pci_ids);
+
+/**
+ * ne_setup_msix - Setup MSI-X vectors for the PCI device.
+ *
+ * @pdev: PCI device to setup the MSI-X for.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_setup_msix(struct pci_dev *pdev)
+{
+ struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+ int nr_vecs = 0;
+ int rc = -EINVAL;
+
+ nr_vecs = pci_msix_vec_count(pdev);
+ if (nr_vecs < 0) {
+ rc = nr_vecs;
+
+ dev_err(&pdev->dev, NE "Error in getting vec count [rc=%d]\n",
+ rc);
+
+ return rc;
+ }
+
+ rc = pci_alloc_irq_vectors(pdev, nr_vecs, nr_vecs, PCI_IRQ_MSIX);
+ if (rc < 0) {
+ dev_err(&pdev->dev, NE "Error in alloc MSI-X vecs [rc=%d]\n",
+ rc);
+
+ return rc;
+ }
+
+ return 0;
+}
+
+/**
+ * ne_teardown_msix - Teardown MSI-X vectors for the PCI device.
+ *
+ * @pdev: PCI device to teardown the MSI-X for.
+ */
+static void ne_teardown_msix(struct pci_dev *pdev)
+{
+ struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+
+ pci_free_irq_vectors(pdev);
+}
+
+/**
+ * ne_pci_dev_enable - Select PCI device version and enable it.
+ *
+ * @pdev: PCI device to select version for and then enable.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_pci_dev_enable(struct pci_dev *pdev)
+{
+ u8 dev_enable_reply = 0;
+ u16 dev_version_reply = 0;
+ struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+
+ iowrite16(NE_VERSION_MAX, ne_pci_dev->iomem_base + NE_VERSION);
+
+ dev_version_reply = ioread16(ne_pci_dev->iomem_base + NE_VERSION);
+ if (dev_version_reply != NE_VERSION_MAX) {
+ dev_err(&pdev->dev, NE "Error in pci dev version cmd\n");
+
+ return -EIO;
+ }
+
+ iowrite8(NE_ENABLE_ON, ne_pci_dev->iomem_base + NE_ENABLE);
+
+ dev_enable_reply = ioread8(ne_pci_dev->iomem_base + NE_ENABLE);
+ if (dev_enable_reply != NE_ENABLE_ON) {
+ dev_err(&pdev->dev, NE "Error in pci dev enable cmd\n");
+
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * ne_pci_dev_disable - Disable PCI device.
+ *
+ * @pdev: PCI device to disable.
+ */
+static void ne_pci_dev_disable(struct pci_dev *pdev)
+{
+ u8 dev_disable_reply = 0;
+ struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+ const unsigned int sleep_time = 10; /* 10 ms */
+ unsigned int sleep_time_count = 0;
+
+ iowrite8(NE_ENABLE_OFF, ne_pci_dev->iomem_base + NE_ENABLE);
+
+ /*
+ * Check for NE_ENABLE_OFF in a loop, to handle cases when the device
+	 * state is not immediately set to disabled, but goes through a
+	 * transitory disabling state.
+ */
+ while (sleep_time_count < DEFAULT_TIMEOUT_MSECS) {
+ dev_disable_reply = ioread8(ne_pci_dev->iomem_base + NE_ENABLE);
+ if (dev_disable_reply == NE_ENABLE_OFF)
+ return;
+
+ msleep_interruptible(sleep_time);
+ sleep_time_count += sleep_time;
+ }
+
+ dev_disable_reply = ioread8(ne_pci_dev->iomem_base + NE_ENABLE);
+ if (dev_disable_reply != NE_ENABLE_OFF)
+ dev_err(&pdev->dev, NE "Error in pci dev disable cmd\n");
+}
+
+static int ne_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct ne_pci_dev *ne_pci_dev = NULL;
+ int rc = -EINVAL;
+
+ ne_pci_dev = kzalloc(sizeof(*ne_pci_dev), GFP_KERNEL);
+ if (!ne_pci_dev)
+ return -ENOMEM;
+
+ rc = pci_enable_device(pdev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, NE "Error in pci dev enable [rc=%d]\n", rc);
+
+ goto free_ne_pci_dev;
+ }
+
+ rc = pci_request_regions_exclusive(pdev, "ne_pci_dev");
+ if (rc < 0) {
+ dev_err(&pdev->dev, NE "Error in pci request regions [rc=%d]\n",
+ rc);
+
+ goto disable_pci_dev;
+ }
+
+ ne_pci_dev->iomem_base = pci_iomap(pdev, PCI_BAR_NE, 0);
+ if (!ne_pci_dev->iomem_base) {
+ rc = -ENOMEM;
+
+ dev_err(&pdev->dev, NE "Error in pci iomap [rc=%d]\n", rc);
+
+ goto release_pci_regions;
+ }
+
+ pci_set_drvdata(pdev, ne_pci_dev);
+
+ rc = ne_setup_msix(pdev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, NE "Error in pci dev msix setup [rc=%d]\n",
+ rc);
+
+ goto iounmap_pci_bar;
+ }
+
+ ne_pci_dev_disable(pdev);
+
+ rc = ne_pci_dev_enable(pdev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, NE "Error in ne_pci_dev enable [rc=%d]\n",
+ rc);
+
+ goto teardown_msix;
+ }
+
+ atomic_set(&ne_pci_dev->cmd_reply_avail, 0);
+ init_waitqueue_head(&ne_pci_dev->cmd_reply_wait_q);
+ INIT_LIST_HEAD(&ne_pci_dev->enclaves_list);
+ mutex_init(&ne_pci_dev->enclaves_list_mutex);
+ mutex_init(&ne_pci_dev->pci_dev_mutex);
+
+ return 0;
+
+teardown_msix:
+ ne_teardown_msix(pdev);
+iounmap_pci_bar:
+ pci_set_drvdata(pdev, NULL);
+ pci_iounmap(pdev, ne_pci_dev->iomem_base);
+release_pci_regions:
+ pci_release_regions(pdev);
+disable_pci_dev:
+ pci_disable_device(pdev);
+free_ne_pci_dev:
+ kfree(ne_pci_dev);
+
+ return rc;
+}
+
+static void ne_pci_remove(struct pci_dev *pdev)
+{
+ struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+
+ if (!ne_pci_dev || !ne_pci_dev->iomem_base)
+ return;
+
+ ne_pci_dev_disable(pdev);
+
+ ne_teardown_msix(pdev);
+
+ pci_set_drvdata(pdev, NULL);
+
+ pci_iounmap(pdev, ne_pci_dev->iomem_base);
+
+ pci_release_regions(pdev);
+
+ pci_disable_device(pdev);
+
+ kfree(ne_pci_dev);
+}
+
+/*
+ * TODO: Add suspend / resume functions for power management w/ CONFIG_PM, if
+ * needed.
+ */
+struct pci_driver ne_pci_driver = {
+ .name = "ne_pci_dev",
+ .id_table = ne_pci_ids,
+ .probe = ne_pci_probe,
+ .remove = ne_pci_remove,
+};
--
2.20.1 (Apple Git-117)
The Nitro Enclaves PCI device exposes an MMIO space that this driver
uses to submit command requests and to receive command replies e.g. for
enclave creation / termination or setting enclave resources.
Add logic for handling PCI device command requests based on the given
command type.
Register an MSI-X interrupt vector for command reply notifications, to
handle this type of communication event.
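A sketch of how a caller uses this logic, with the SLOT_INFO command as an
example; this hypothetical helper mirrors the usage in the out-of-band event
handling patch later in the series.

    /* Sketch: query the info of an enclave slot; hypothetical helper. */
    static int ne_query_slot_info(struct pci_dev *pdev, u64 slot_uid,
                                  struct ne_pci_dev_cmd_reply *cmd_reply)
    {
            struct slot_info_req slot_info_req = {};

            slot_info_req.slot_uid = slot_uid;

            return ne_do_request(pdev, SLOT_INFO, &slot_info_req,
                                 sizeof(slot_info_req), cmd_reply,
                                 sizeof(*cmd_reply));
    }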
Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
Fix issue reported in:
https://lore.kernel.org/lkml/202004231644.xTmN4Z1z%[email protected]/
Reported-by: kbuild test robot <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the WARN_ON calls.
* Update static calls sanity checks.
* Remove "ratelimited" from the logs that are not in the ioctl call paths.
v1 -> v2
* Add log pattern for NE.
* Remove the BUG_ON calls.
* Update goto labels to match their purpose.
* Add fix for kbuild report.
---
drivers/virt/nitro_enclaves/ne_pci_dev.c | 224 +++++++++++++++++++++++
1 file changed, 224 insertions(+)
diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.c b/drivers/virt/nitro_enclaves/ne_pci_dev.c
index 0b66166787b6..5e8bfda4bd0f 100644
--- a/drivers/virt/nitro_enclaves/ne_pci_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_pci_dev.c
@@ -29,6 +29,209 @@ static const struct pci_device_id ne_pci_ids[] = {
MODULE_DEVICE_TABLE(pci, ne_pci_ids);
+/**
+ * ne_submit_request - Submit command request to the PCI device based on the
+ * command type.
+ *
+ * This function gets called with the ne_pci_dev mutex held.
+ *
+ * @pdev: PCI device to send the command to.
+ * @cmd_type: command type of the request sent to the PCI device.
+ * @cmd_request: command request payload.
+ * @cmd_request_size: size of the command request payload.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_submit_request(struct pci_dev *pdev,
+ enum ne_pci_dev_cmd_type cmd_type,
+ void *cmd_request, size_t cmd_request_size)
+{
+ struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+
+ memcpy_toio(ne_pci_dev->iomem_base + NE_SEND_DATA, cmd_request,
+ cmd_request_size);
+
+ iowrite32(cmd_type, ne_pci_dev->iomem_base + NE_COMMAND);
+
+ return 0;
+}
+
+/**
+ * ne_retrieve_reply - Retrieve reply from the PCI device.
+ *
+ * This function gets called with the ne_pci_dev mutex held.
+ *
+ * @pdev: PCI device to receive the reply from.
+ * @cmd_reply: command reply payload.
+ * @cmd_reply_size: size of the command reply payload.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_retrieve_reply(struct pci_dev *pdev,
+ struct ne_pci_dev_cmd_reply *cmd_reply,
+ size_t cmd_reply_size)
+{
+ struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+
+ memcpy_fromio(cmd_reply, ne_pci_dev->iomem_base + NE_RECV_DATA,
+ cmd_reply_size);
+
+ return 0;
+}
+
+/**
+ * ne_wait_for_reply - Wait for a reply of a PCI command.
+ *
+ * This function gets called with the ne_pci_dev mutex held.
+ *
+ * @pdev: PCI device for which a reply is waited.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_wait_for_reply(struct pci_dev *pdev)
+{
+ struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+ int rc = -EINVAL;
+
+ /*
+ * TODO: Update to _interruptible and handle interrupted wait event
+ * e.g. -ERESTARTSYS, incoming signals + add / update timeout.
+ */
+ rc = wait_event_timeout(ne_pci_dev->cmd_reply_wait_q,
+ atomic_read(&ne_pci_dev->cmd_reply_avail) != 0,
+ msecs_to_jiffies(DEFAULT_TIMEOUT_MSECS));
+ if (!rc)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+int ne_do_request(struct pci_dev *pdev, enum ne_pci_dev_cmd_type cmd_type,
+ void *cmd_request, size_t cmd_request_size,
+ struct ne_pci_dev_cmd_reply *cmd_reply, size_t cmd_reply_size)
+{
+ struct ne_pci_dev *ne_pci_dev = NULL;
+ int rc = -EINVAL;
+
+ if (!pdev)
+ return -EINVAL;
+
+ ne_pci_dev = pci_get_drvdata(pdev);
+ if (!ne_pci_dev || !ne_pci_dev->iomem_base)
+ return -EINVAL;
+
+ if (cmd_type <= INVALID_CMD || cmd_type >= MAX_CMD) {
+ dev_err_ratelimited(&pdev->dev, NE "Invalid cmd type=%u\n",
+ cmd_type);
+
+ return -EINVAL;
+ }
+
+ if (!cmd_request) {
+ dev_err_ratelimited(&pdev->dev, NE "Null cmd request\n");
+
+ return -EINVAL;
+ }
+
+ if (cmd_request_size > NE_SEND_DATA_SIZE) {
+ dev_err_ratelimited(&pdev->dev,
+ NE "Invalid req size=%zu for cmd type=%u\n",
+ cmd_request_size, cmd_type);
+
+ return -EINVAL;
+ }
+
+ if (!cmd_reply) {
+ dev_err_ratelimited(&pdev->dev, NE "Null cmd reply\n");
+
+ return -EINVAL;
+ }
+
+ if (cmd_reply_size > NE_RECV_DATA_SIZE) {
+ dev_err_ratelimited(&pdev->dev, NE "Invalid reply size=%zu\n",
+ cmd_reply_size);
+
+ return -EINVAL;
+ }
+
+ /*
+ * Use this mutex so that the PCI device handles one command request at
+ * a time.
+ */
+ mutex_lock(&ne_pci_dev->pci_dev_mutex);
+
+ atomic_set(&ne_pci_dev->cmd_reply_avail, 0);
+
+ rc = ne_submit_request(pdev, cmd_type, cmd_request, cmd_request_size);
+ if (rc < 0) {
+ dev_err_ratelimited(&pdev->dev,
+ NE "Error in submit request [rc=%d]\n",
+ rc);
+
+ goto unlock_mutex;
+ }
+
+ rc = ne_wait_for_reply(pdev);
+ if (rc < 0) {
+ dev_err_ratelimited(&pdev->dev,
+ NE "Error in wait for reply [rc=%d]\n",
+ rc);
+
+ goto unlock_mutex;
+ }
+
+ rc = ne_retrieve_reply(pdev, cmd_reply, cmd_reply_size);
+ if (rc < 0) {
+ dev_err_ratelimited(&pdev->dev,
+ NE "Error in retrieve reply [rc=%d]\n",
+ rc);
+
+ goto unlock_mutex;
+ }
+
+ atomic_set(&ne_pci_dev->cmd_reply_avail, 0);
+
+ if (cmd_reply->rc < 0) {
+ dev_err_ratelimited(&pdev->dev,
+ NE "Error in cmd process logic [rc=%d]\n",
+ cmd_reply->rc);
+
+ rc = cmd_reply->rc;
+
+ goto unlock_mutex;
+ }
+
+ mutex_unlock(&ne_pci_dev->pci_dev_mutex);
+
+ return 0;
+
+unlock_mutex:
+ mutex_unlock(&ne_pci_dev->pci_dev_mutex);
+
+ return rc;
+}
+
+/**
+ * ne_reply_handler - Interrupt handler for retrieving a reply matching
+ * a request sent to the PCI device for enclave lifetime management.
+ *
+ * @irq: received interrupt for a reply sent by the PCI device.
+ * @args: PCI device private data structure.
+ *
+ * @returns: IRQ_HANDLED on handled interrupt, IRQ_NONE otherwise.
+ */
+static irqreturn_t ne_reply_handler(int irq, void *args)
+{
+ struct ne_pci_dev *ne_pci_dev = (struct ne_pci_dev *)args;
+
+ atomic_set(&ne_pci_dev->cmd_reply_avail, 1);
+
+ /* TODO: Update to _interruptible. */
+ wake_up(&ne_pci_dev->cmd_reply_wait_q);
+
+ return IRQ_HANDLED;
+}
+
/**
* ne_setup_msix - Setup MSI-X vectors for the PCI device.
*
@@ -60,7 +263,26 @@ static int ne_setup_msix(struct pci_dev *pdev)
return rc;
}
+ /*
+ * This IRQ gets triggered every time the PCI device responds to a
+ * command request. The reply is then retrieved, reading from the MMIO
+ * space of the PCI device.
+ */
+ rc = request_irq(pci_irq_vector(pdev, NE_VEC_REPLY),
+ ne_reply_handler, 0, "enclave_cmd", ne_pci_dev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, NE "Error in request irq reply [rc=%d]\n",
+ rc);
+
+ goto free_irq_vectors;
+ }
+
return 0;
+
+free_irq_vectors:
+ pci_free_irq_vectors(pdev);
+
+ return rc;
}
/**
@@ -72,6 +294,8 @@ static void ne_teardown_msix(struct pci_dev *pdev)
{
struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+ free_irq(pci_irq_vector(pdev, NE_VEC_REPLY), ne_pci_dev);
+
pci_free_irq_vectors(pdev);
}
--
2.20.1 (Apple Git-117)
In addition to the replies sent by the Nitro Enclaves PCI device in
response to command requests, out-of-band enclave events can happen e.g.
an enclave crashes. In this case, the Nitro Enclaves driver needs to be
aware of the event and notify the corresponding user space process that
abstracts the enclave.
Register an MSI-X interrupt vector to be used for this kind of
out-of-band event. The interrupt notifies that the state of an enclave
changed and the driver logic scans the state of each running enclave to
identify the one(s) for which the notification is intended.
Create a workqueue to handle the out-of-band events. Notify the user
space enclave process, which uses a polling mechanism on the enclave fd.
The enclave fd is returned as a result of the KVM_CREATE_VM ioctl call.
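A minimal user space sketch of the polling side, assuming an enclave fd
obtained via the KVM_CREATE_VM ioctl; POLLHUP is reported on an enclave state
change e.g. an enclave crash.

    #include <poll.h>

    /* Sketch: block until an out-of-band enclave event is reported. */
    static int wait_for_enclave_event(int enclave_fd)
    {
            struct pollfd pollfd = {
                    .fd = enclave_fd,
                    .events = 0,  /* POLLHUP is reported regardless */
            };

            if (poll(&pollfd, 1, -1) < 0)
                    return -1;

            return (pollfd.revents & POLLHUP) ? 0 : -1;
    }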
Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the WARN_ON calls.
* Update static calls sanity checks.
* Remove "ratelimited" from the logs that are not in the ioctl call paths.
v1 -> v2
* Add log pattern for NE.
* Update goto labels to match their purpose.
---
drivers/virt/nitro_enclaves/ne_pci_dev.c | 118 +++++++++++++++++++++++
1 file changed, 118 insertions(+)
diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.c b/drivers/virt/nitro_enclaves/ne_pci_dev.c
index 5e8bfda4bd0f..fd796450c9eb 100644
--- a/drivers/virt/nitro_enclaves/ne_pci_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_pci_dev.c
@@ -232,6 +232,87 @@ static irqreturn_t ne_reply_handler(int irq, void *args)
return IRQ_HANDLED;
}
+/**
+ * ne_event_work_handler - Work queue handler for notifying enclaves on
+ * a state change received by the event interrupt handler.
+ *
+ * An out-of-band event is issued by the Nitro Hypervisor when at least
+ * one enclave changes state without client interaction.
+ *
+ * @work: item containing the Nitro Enclaves PCI device for which an
+ * out-of-band event was issued.
+ */
+static void ne_event_work_handler(struct work_struct *work)
+{
+ struct ne_pci_dev_cmd_reply cmd_reply = {};
+ struct ne_enclave *ne_enclave = NULL;
+ struct ne_pci_dev *ne_pci_dev =
+ container_of(work, struct ne_pci_dev, notify_work);
+ int rc = -EINVAL;
+ struct slot_info_req slot_info_req = {};
+
+ mutex_lock(&ne_pci_dev->enclaves_list_mutex);
+
+ /*
+ * Iterate over all enclaves registered for the Nitro Enclaves
+	 * PCI device and determine which enclave(s) the out-of-band event
+	 * corresponds to.
+ */
+ list_for_each_entry(ne_enclave, &ne_pci_dev->enclaves_list,
+ enclave_list_entry) {
+ mutex_lock(&ne_enclave->enclave_info_mutex);
+
+ /*
+ * Enclaves that were never started cannot receive out-of-band
+ * events.
+ */
+ if (ne_enclave->state != NE_STATE_RUNNING)
+ goto unlock;
+
+ slot_info_req.slot_uid = ne_enclave->slot_uid;
+
+ rc = ne_do_request(ne_enclave->pdev, SLOT_INFO, &slot_info_req,
+ sizeof(slot_info_req), &cmd_reply,
+ sizeof(cmd_reply));
+ if (rc < 0)
+ dev_err(&ne_enclave->pdev->dev,
+ NE "Error in slot info [rc=%d]\n", rc);
+
+ /* Notify enclave process that the enclave state changed. */
+ if (ne_enclave->state != cmd_reply.state) {
+ ne_enclave->state = cmd_reply.state;
+
+ ne_enclave->has_event = true;
+
+ wake_up_interruptible(&ne_enclave->eventq);
+ }
+
+unlock:
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+ }
+
+ mutex_unlock(&ne_pci_dev->enclaves_list_mutex);
+}
+
+/**
+ * ne_event_handler - Interrupt handler for PCI device out-of-band
+ * events. This interrupt does not supply any data in the MMIO region.
+ * It notifies a change in the state of any of the launched enclaves.
+ *
+ * @irq: received interrupt for an out-of-band event.
+ * @args: PCI device private data structure.
+ *
+ * @returns: IRQ_HANDLED on handled interrupt, IRQ_NONE otherwise.
+ */
+static irqreturn_t ne_event_handler(int irq, void *args)
+{
+ struct ne_pci_dev *ne_pci_dev = (struct ne_pci_dev *)args;
+
+ queue_work(ne_pci_dev->event_wq, &ne_pci_dev->notify_work);
+
+ return IRQ_HANDLED;
+}
+
/**
* ne_setup_msix - Setup MSI-X vectors for the PCI device.
*
@@ -277,8 +358,38 @@ static int ne_setup_msix(struct pci_dev *pdev)
goto free_irq_vectors;
}
+ ne_pci_dev->event_wq = create_singlethread_workqueue("ne_pci_dev_wq");
+ if (!ne_pci_dev->event_wq) {
+ rc = -ENOMEM;
+
+ dev_err(&pdev->dev, NE "Cannot get wq for dev events [rc=%d]\n",
+ rc);
+
+ goto free_reply_irq_vec;
+ }
+
+ INIT_WORK(&ne_pci_dev->notify_work, ne_event_work_handler);
+
+ /*
+ * This IRQ gets triggered every time any enclave's state changes. Its
+ * handler then scans for the changes and propagates them to the user
+ * space.
+ */
+ rc = request_irq(pci_irq_vector(pdev, NE_VEC_EVENT),
+ ne_event_handler, 0, "enclave_evt", ne_pci_dev);
+ if (rc < 0) {
+ dev_err(&pdev->dev, NE "Error in request irq event [rc=%d]\n",
+ rc);
+
+ goto destroy_wq;
+ }
+
return 0;
+destroy_wq:
+ destroy_workqueue(ne_pci_dev->event_wq);
+free_reply_irq_vec:
+ free_irq(pci_irq_vector(pdev, NE_VEC_REPLY), ne_pci_dev);
free_irq_vectors:
pci_free_irq_vectors(pdev);
@@ -294,6 +405,13 @@ static void ne_teardown_msix(struct pci_dev *pdev)
{
struct ne_pci_dev *ne_pci_dev = pci_get_drvdata(pdev);
+
+ free_irq(pci_irq_vector(pdev, NE_VEC_EVENT), ne_pci_dev);
+
+ flush_work(&ne_pci_dev->notify_work);
+ flush_workqueue(ne_pci_dev->event_wq);
+ destroy_workqueue(ne_pci_dev->event_wq);
+
free_irq(pci_irq_vector(pdev, NE_VEC_REPLY), ne_pci_dev);
pci_free_irq_vectors(pdev);
--
2.20.1 (Apple Git-117)
The Nitro Enclaves driver provides an ioctl interface to the user space
for enclave lifetime management e.g. enclave creation / termination and
setting enclave resources such as memory and CPU.
This ioctl interface is mapped to a Nitro Enclaves misc device.
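From user space, the interface is reached by opening the misc device node; a
minimal sketch, assuming the default /dev placement of the node:

    #include <fcntl.h>
    #include <stdio.h>

    /* Sketch: open the NE misc device to access the ioctl interface. */
    static int ne_open_misc_dev(void)
    {
            int fd = open("/dev/nitro_enclaves", O_RDWR | O_CLOEXEC);

            if (fd < 0)
                    perror("open /dev/nitro_enclaves");

            return fd;
    }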
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the GPL additional wording as SPDX-License-Identifier is already in
place.
* Remove the WARN_ON calls.
* Remove linux/bug and linux/kvm_host includes that are not needed.
* Remove "ratelimited" from the logs that are not in the ioctl call paths.
* Remove file ops that do nothing for now - open and release.
v1 -> v2
* Add log pattern for NE.
* Update goto labels to match their purpose.
* Update ne_cpu_pool data structure to include the global mutex.
* Update NE misc device mode to 0660.
* Check if the CPU siblings are included in the NE CPU pool, as full CPU cores
are given for the enclave(s).
---
drivers/virt/nitro_enclaves/ne_misc_dev.c | 165 ++++++++++++++++++++++
drivers/virt/nitro_enclaves/ne_pci_dev.c | 12 ++
2 files changed, 177 insertions(+)
create mode 100644 drivers/virt/nitro_enclaves/ne_misc_dev.c
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
new file mode 100644
index 000000000000..4a6f74cacdb0
--- /dev/null
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ */
+
+/**
+ * Enclave lifetime management driver for Nitro Enclaves (NE).
+ * Nitro is a hypervisor that has been developed by Amazon.
+ */
+
+#include <linux/anon_inodes.h>
+#include <linux/cpu.h>
+#include <linux/device.h>
+#include <linux/file.h>
+#include <linux/hugetlb.h>
+#include <linux/list.h>
+#include <linux/miscdevice.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/nitro_enclaves.h>
+#include <linux/pci.h>
+#include <linux/poll.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "ne_misc_dev.h"
+#include "ne_pci_dev.h"
+
+#define MIN_MEM_REGION_SIZE (2 * 1024UL * 1024UL)
+
+#define NE "nitro_enclaves: "
+
+#define NE_DEV_NAME "nitro_enclaves"
+
+#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
+
+static char *ne_cpus;
+module_param(ne_cpus, charp, 0644);
+MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
+
+/* CPU pool used for Nitro Enclaves. */
+struct ne_cpu_pool {
+ /* Available CPUs in the pool. */
+ cpumask_var_t avail;
+ struct mutex mutex;
+};
+
+static struct ne_cpu_pool ne_cpu_pool;
+
+static long ne_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+
+ default:
+ return -ENOTTY;
+ }
+
+ return 0;
+}
+
+static const struct file_operations ne_fops = {
+ .owner = THIS_MODULE,
+ .llseek = noop_llseek,
+ .unlocked_ioctl = ne_ioctl,
+};
+
+struct miscdevice ne_miscdevice = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = NE_DEV_NAME,
+ .fops = &ne_fops,
+ .mode = 0660,
+};
+
+static int __init ne_init(void)
+{
+ unsigned int cpu = 0;
+ unsigned int cpu_sibling = 0;
+ int rc = -EINVAL;
+
+ if (!zalloc_cpumask_var(&ne_cpu_pool.avail, GFP_KERNEL))
+ return -ENOMEM;
+
+ mutex_init(&ne_cpu_pool.mutex);
+
+ rc = cpulist_parse(ne_cpus, ne_cpu_pool.avail);
+ if (rc < 0) {
+ pr_err(NE "Error in cpulist parse [rc=%d]\n", rc);
+
+ goto free_cpumask;
+ }
+
+ /*
+ * Check if CPU siblings are included in the provided CPU pool. The
+ * expectation is that CPU cores are made available in the CPU pool for
+ * enclaves.
+ */
+ for_each_cpu(cpu, ne_cpu_pool.avail) {
+ for_each_cpu(cpu_sibling, topology_sibling_cpumask(cpu)) {
+ if (!cpumask_test_cpu(cpu_sibling, ne_cpu_pool.avail)) {
+ pr_err(NE "CPU %d is not in the CPU pool\n",
+ cpu_sibling);
+
+ rc = -EINVAL;
+
+ goto free_cpumask;
+ }
+ }
+ }
+
+ for_each_cpu(cpu, ne_cpu_pool.avail) {
+ rc = remove_cpu(cpu);
+ if (rc != 0) {
+ pr_err(NE "CPU %d not offlined [rc=%d]\n", cpu, rc);
+
+ goto online_cpus;
+ }
+ }
+
+ rc = pci_register_driver(&ne_pci_driver);
+ if (rc < 0) {
+ pr_err(NE "Error in pci register driver [rc=%d]\n", rc);
+
+ goto online_cpus;
+ }
+
+ return 0;
+
+online_cpus:
+ for_each_cpu(cpu, ne_cpu_pool.avail)
+ add_cpu(cpu);
+free_cpumask:
+ free_cpumask_var(ne_cpu_pool.avail);
+
+ return rc;
+}
+
+static void __exit ne_exit(void)
+{
+ unsigned int cpu = 0;
+ int rc = -EINVAL;
+
+ pci_unregister_driver(&ne_pci_driver);
+
+ if (!ne_cpu_pool.avail)
+ return;
+
+ for_each_cpu(cpu, ne_cpu_pool.avail) {
+ rc = add_cpu(cpu);
+ if (rc != 0)
+ pr_err(NE "CPU %d not onlined [rc=%d]\n", cpu, rc);
+ }
+
+ free_cpumask_var(ne_cpu_pool.avail);
+}
+
+/* TODO: Handle actions such as reboot, kexec. */
+
+module_init(ne_init);
+module_exit(ne_exit);
+
+MODULE_AUTHOR("Amazon.com, Inc. or its affiliates");
+MODULE_DESCRIPTION("Nitro Enclaves Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.c b/drivers/virt/nitro_enclaves/ne_pci_dev.c
index fd796450c9eb..01213e2cfed8 100644
--- a/drivers/virt/nitro_enclaves/ne_pci_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_pci_dev.c
@@ -537,6 +537,14 @@ static int ne_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto teardown_msix;
}
+ rc = misc_register(&ne_miscdevice);
+ if (rc < 0) {
+ dev_err(&pdev->dev, NE "Error in misc dev register [rc=%d]\n",
+ rc);
+
+ goto disable_ne_pci_dev;
+ }
+
atomic_set(&ne_pci_dev->cmd_reply_avail, 0);
init_waitqueue_head(&ne_pci_dev->cmd_reply_wait_q);
INIT_LIST_HEAD(&ne_pci_dev->enclaves_list);
@@ -545,6 +553,8 @@ static int ne_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return 0;
+disable_ne_pci_dev:
+ ne_pci_dev_disable(pdev);
teardown_msix:
ne_teardown_msix(pdev);
iounmap_pci_bar:
@@ -567,6 +577,8 @@ static void ne_pci_remove(struct pci_dev *pdev)
if (!ne_pci_dev || !ne_pci_dev->iomem_base)
return;
+ misc_deregister(&ne_miscdevice);
+
ne_pci_dev_disable(pdev);
ne_teardown_msix(pdev);
--
2.20.1 (Apple Git-117)
Before an enclave is started, it has its resources set. One of these
resources is CPU.
Add ioctl command logic for enclave vCPU creation. Return as result a
file descriptor that is associated with the enclave vCPU.
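A user space sketch of the call, assuming an enclave fd obtained via the
KVM_CREATE_VM ioctl; in this version the driver picks the CPU from the NE CPU
pool, and the returned fd is associated with the enclave vCPU.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: add a vCPU to the enclave; returns the enclave vCPU fd. */
    static int ne_add_vcpu(int enclave_fd)
    {
            __u32 vcpu_id = 0;  /* the driver picks a CPU from the NE pool */

            return ioctl(enclave_fd, KVM_CREATE_VCPU, &vcpu_id);
    }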
Signed-off-by: Alexandru Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the WARN_ON calls.
* Update static calls sanity checks.
* Update kzfree() calls to kfree().
* Remove file ops that do nothing for now - open, ioctl and release.
v1 -> v2
* Add log pattern for NE.
* Update goto labels to match their purpose.
* Remove the BUG_ON calls.
* Check if enclave state is init when setting enclave vcpu.
---
drivers/virt/nitro_enclaves/ne_misc_dev.c | 190 ++++++++++++++++++++++
1 file changed, 190 insertions(+)
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index 8cf1c4ee1812..6db88e128d1f 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -49,10 +49,200 @@ struct ne_cpu_pool {
static struct ne_cpu_pool ne_cpu_pool;
+static const struct file_operations ne_enclave_vcpu_fops = {
+ .owner = THIS_MODULE,
+ .llseek = noop_llseek,
+};
+
+/**
+ * ne_get_cpu_from_cpu_pool - Get a CPU from the CPU pool, if it is set.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ * @vcpu_id: id of the CPU to be associated with the given slot, apic id on x86.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_get_cpu_from_cpu_pool(struct ne_enclave *ne_enclave, u32 *vcpu_id)
+{
+ unsigned int cpu = 0;
+ unsigned int cpu_sibling = 0;
+
+ /* There are CPU siblings available to choose from. */
+ cpu = cpumask_any(ne_enclave->cpu_siblings);
+ if (cpu < nr_cpu_ids) {
+ cpumask_clear_cpu(cpu, ne_enclave->cpu_siblings);
+
+ *vcpu_id = cpu;
+
+ return 0;
+ }
+
+ mutex_lock(&ne_cpu_pool.mutex);
+
+ /* Choose any CPU from the available CPU pool. */
+ cpu = cpumask_any(ne_cpu_pool.avail);
+ if (cpu >= nr_cpu_ids) {
+ pr_err_ratelimited(NE "No CPUs available in CPU pool\n");
+
+ mutex_unlock(&ne_cpu_pool.mutex);
+
+ return -EINVAL;
+ }
+
+ cpumask_clear_cpu(cpu, ne_cpu_pool.avail);
+
+ /*
+ * Make sure the CPU siblings are not marked as
+ * available anymore.
+ */
+ for_each_cpu(cpu_sibling, topology_sibling_cpumask(cpu)) {
+ if (cpu_sibling != cpu) {
+ cpumask_clear_cpu(cpu_sibling, ne_cpu_pool.avail);
+
+ cpumask_set_cpu(cpu_sibling, ne_enclave->cpu_siblings);
+ }
+ }
+
+ mutex_unlock(&ne_cpu_pool.mutex);
+
+ *vcpu_id = cpu;
+
+ return 0;
+}
+
+/**
+ * ne_create_vcpu_ioctl - Add vCPU to the slot associated with the current
+ * enclave. Create vCPU file descriptor to be further used for CPU handling.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ * @vcpu_id: id of the CPU to be associated with the given slot, apic id on x86.
+ *
+ * @returns: vCPU fd on success, negative return value on failure.
+ */
+static int ne_create_vcpu_ioctl(struct ne_enclave *ne_enclave, u32 vcpu_id)
+{
+ struct ne_pci_dev_cmd_reply cmd_reply = {};
+ int fd = 0;
+ struct file *file = NULL;
+ struct ne_vcpu_id *ne_vcpu_id = NULL;
+ int rc = -EINVAL;
+ struct slot_add_vcpu_req slot_add_vcpu_req = {};
+
+ if (ne_enclave->mm != current->mm)
+ return -EIO;
+
+ ne_vcpu_id = kzalloc(sizeof(*ne_vcpu_id), GFP_KERNEL);
+ if (!ne_vcpu_id)
+ return -ENOMEM;
+
+ fd = get_unused_fd_flags(O_CLOEXEC);
+ if (fd < 0) {
+ rc = fd;
+
+ pr_err_ratelimited(NE "Error in getting unused fd [rc=%d]\n",
+ rc);
+
+ goto free_ne_vcpu_id;
+ }
+
+ /* TODO: Include (vcpu) id in the ne-vm-vcpu naming. */
+ file = anon_inode_getfile("ne-vm-vcpu", &ne_enclave_vcpu_fops,
+ ne_enclave, O_RDWR);
+ if (IS_ERR(file)) {
+ rc = PTR_ERR(file);
+
+ pr_err_ratelimited(NE "Error in anon inode get file [rc=%d]\n",
+ rc);
+
+ goto put_fd;
+ }
+
+ slot_add_vcpu_req.slot_uid = ne_enclave->slot_uid;
+ slot_add_vcpu_req.vcpu_id = vcpu_id;
+
+ rc = ne_do_request(ne_enclave->pdev, SLOT_ADD_VCPU, &slot_add_vcpu_req,
+ sizeof(slot_add_vcpu_req), &cmd_reply,
+ sizeof(cmd_reply));
+ if (rc < 0) {
+ pr_err_ratelimited(NE "Error in slot add vcpu [rc=%d]\n", rc);
+
+ goto put_file;
+ }
+
+ ne_vcpu_id->vcpu_id = vcpu_id;
+
+ list_add(&ne_vcpu_id->vcpu_id_list_entry, &ne_enclave->vcpu_ids_list);
+
+ ne_enclave->nr_vcpus++;
+
+ fd_install(fd, file);
+
+ return fd;
+
+put_file:
+ fput(file);
+put_fd:
+ put_unused_fd(fd);
+free_ne_vcpu_id:
+ kfree(ne_vcpu_id);
+
+ return rc;
+}
+
static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
+ struct ne_enclave *ne_enclave = file->private_data;
+
+ if (!ne_enclave || !ne_enclave->pdev)
+ return -EINVAL;
+
switch (cmd) {
+ case KVM_CREATE_VCPU: {
+ int rc = -EINVAL;
+ u32 vcpu_id = 0;
+
+ if (copy_from_user(&vcpu_id, (void *)arg, sizeof(vcpu_id))) {
+ pr_err_ratelimited(NE "Error in copy from user\n");
+
+ return -EFAULT;
+ }
+
+ mutex_lock(&ne_enclave->enclave_info_mutex);
+
+ if (ne_enclave->state != NE_STATE_INIT) {
+ pr_err_ratelimited(NE "Enclave isn't in init state\n");
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return -EINVAL;
+ }
+
+ /* Use the CPU pool for choosing a CPU for the enclave. */
+ rc = ne_get_cpu_from_cpu_pool(ne_enclave, &vcpu_id);
+ if (rc < 0) {
+ pr_err_ratelimited(NE "Error in get CPU from pool\n");
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return -EINVAL;
+ }
+
+ rc = ne_create_vcpu_ioctl(ne_enclave, vcpu_id);
+
+		/* Put the CPU back in the enclave CPU pool, if add vcpu failed. */
+ if (rc < 0)
+ cpumask_set_cpu(vcpu_id, ne_enclave->cpu_siblings);
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return rc;
+ }
+
default:
return -ENOTTY;
}
--
2.20.1 (Apple Git-117)
Add ioctl command logic for enclave VM creation. It triggers a slot
allocation. The enclave resources will be associated with this slot,
which will be used as an identifier for triggering the enclave run.
Return a file descriptor, namely the enclave fd. This is further used by
the associated user space enclave process to set enclave resources and
trigger enclave termination.
The poll function is implemented in order to notify the enclave process
when an enclave exits without a specific enclave termination command
trigger e.g. when an enclave crashes.
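A user space sketch of the call; the type argument is passed by pointer,
matching the copy_from_user in the ioctl logic below, and the 0 value used
here is an assumption for the default type.

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: create an enclave VM; returns the enclave fd on success. */
    static int ne_create_enclave(int ne_fd)
    {
            unsigned long type = 0;  /* assumed default VM type */

            return ioctl(ne_fd, KVM_CREATE_VM, &type);
    }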
Signed-off-by: Alexandru Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the WARN_ON calls.
* Update static calls sanity checks.
* Update kzfree() calls to kfree().
* Remove file ops that do nothing for now - open.
v1 -> v2
* Add log pattern for NE.
* Update goto labels to match their purpose.
* Remove the BUG_ON calls.
---
drivers/virt/nitro_enclaves/ne_misc_dev.c | 160 ++++++++++++++++++++++
1 file changed, 160 insertions(+)
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index 4a6f74cacdb0..8cf1c4ee1812 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -49,9 +49,169 @@ struct ne_cpu_pool {
static struct ne_cpu_pool ne_cpu_pool;
+static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ switch (cmd) {
+ default:
+ return -ENOTTY;
+ }
+
+ return 0;
+}
+
+static int ne_enclave_release(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+static __poll_t ne_enclave_poll(struct file *file, poll_table *wait)
+{
+ __poll_t mask = 0;
+ struct ne_enclave *ne_enclave = file->private_data;
+
+ poll_wait(file, &ne_enclave->eventq, wait);
+
+ if (!ne_enclave->has_event)
+ return mask;
+
+ mask = POLLHUP;
+
+ return mask;
+}
+
+static const struct file_operations ne_enclave_fops = {
+ .owner = THIS_MODULE,
+ .llseek = noop_llseek,
+ .poll = ne_enclave_poll,
+ .unlocked_ioctl = ne_enclave_ioctl,
+ .release = ne_enclave_release,
+};
+
+/**
+ * ne_create_vm_ioctl - Alloc slot to be associated with an enclave. Create
+ * enclave file descriptor to be further used for enclave resources handling
+ * e.g. memory regions and CPUs.
+ *
+ * This function gets called with the ne_pci_dev enclave mutex held.
+ *
+ * @pdev: PCI device used for enclave lifetime management.
+ * @ne_pci_dev: private data associated with the PCI device.
+ * @type: type of the virtual machine to be created.
+ *
+ * @returns: enclave fd on success, negative return value on failure.
+ */
+static int ne_create_vm_ioctl(struct pci_dev *pdev,
+ struct ne_pci_dev *ne_pci_dev, unsigned long type)
+{
+ struct ne_pci_dev_cmd_reply cmd_reply = {};
+ int fd = 0;
+ struct file *file = NULL;
+ struct ne_enclave *ne_enclave = NULL;
+ int rc = -EINVAL;
+ struct slot_alloc_req slot_alloc_req = {};
+
+ ne_enclave = kzalloc(sizeof(*ne_enclave), GFP_KERNEL);
+ if (!ne_enclave)
+ return -ENOMEM;
+
+ if (!zalloc_cpumask_var(&ne_enclave->cpu_siblings, GFP_KERNEL)) {
+ kfree(ne_enclave);
+
+ return -ENOMEM;
+ }
+
+ fd = get_unused_fd_flags(O_CLOEXEC);
+ if (fd < 0) {
+ rc = fd;
+
+ pr_err_ratelimited(NE "Error in getting unused fd [rc=%d]\n",
+ rc);
+
+ goto free_cpumask;
+ }
+
+ file = anon_inode_getfile("ne-vm", &ne_enclave_fops, ne_enclave,
+ O_RDWR);
+ if (IS_ERR(file)) {
+ rc = PTR_ERR(file);
+
+ pr_err_ratelimited(NE "Error in anon inode get file [rc=%d]\n",
+ rc);
+
+ goto put_fd;
+ }
+
+ ne_enclave->pdev = pdev;
+
+ rc = ne_do_request(ne_enclave->pdev, SLOT_ALLOC, &slot_alloc_req,
+ sizeof(slot_alloc_req), &cmd_reply,
+ sizeof(cmd_reply));
+ if (rc < 0) {
+ pr_err_ratelimited(NE "Error in slot alloc [rc=%d]\n", rc);
+
+ goto put_file;
+ }
+
+ init_waitqueue_head(&ne_enclave->eventq);
+ ne_enclave->has_event = false;
+ mutex_init(&ne_enclave->enclave_info_mutex);
+ ne_enclave->max_mem_regions = cmd_reply.mem_regions;
+ INIT_LIST_HEAD(&ne_enclave->mem_regions_list);
+ ne_enclave->mm = current->mm;
+ ne_enclave->slot_uid = cmd_reply.slot_uid;
+ ne_enclave->state = NE_STATE_INIT;
+ INIT_LIST_HEAD(&ne_enclave->vcpu_ids_list);
+
+ list_add(&ne_enclave->enclave_list_entry, &ne_pci_dev->enclaves_list);
+
+ fd_install(fd, file);
+
+ return fd;
+
+put_file:
+ fput(file);
+put_fd:
+ put_unused_fd(fd);
+free_cpumask:
+ free_cpumask_var(ne_enclave->cpu_siblings);
+ kfree(ne_enclave);
+
+ return rc;
+}
+
static long ne_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
+ struct ne_pci_dev *ne_pci_dev = NULL;
+ struct pci_dev *pdev = pci_get_device(PCI_VENDOR_ID_AMAZON,
+ PCI_DEVICE_ID_NE, NULL);
+
+ if (!pdev)
+ return -EINVAL;
+
+ ne_pci_dev = pci_get_drvdata(pdev);
+ if (!ne_pci_dev)
+ return -EINVAL;
+
switch (cmd) {
+ case KVM_CREATE_VM: {
+ int rc = -EINVAL;
+ unsigned long type = 0;
+
+ if (copy_from_user(&type, (void __user *)arg, sizeof(type))) {
+ pr_err_ratelimited(NE "Error in copy from user\n");
+
+ return -EFAULT;
+ }
+
+ mutex_lock(&ne_pci_dev->enclaves_list_mutex);
+
+ rc = ne_create_vm_ioctl(pdev, ne_pci_dev, type);
+
+ mutex_unlock(&ne_pci_dev->enclaves_list_mutex);
+
+ return rc;
+ }
default:
return -ENOTTY;
--
2.20.1 (Apple Git-117)
Another resource that is set for an enclave is memory. User space
memory regions, which need to be backed by contiguous physical memory
regions, are associated with the enclave.
One solution for allocating / reserving contiguous memory regions, used
for integration, is hugetlbfs. The user space process associated with
the enclave passes these memory regions to the driver.
Add ioctl command logic for setting a user space memory region for an
enclave.
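For illustration, a hedged sketch of the user space side for one region
(the 2 MiB size matches MIN_MEM_REGION_SIZE; it assumes hugetlbfs pages
were reserved beforehand e.g. via /proc/sys/vm/nr_hugepages, as in the
sample added later in this series):

#include <sys/ioctl.h>
#include <sys/mman.h>

#include <linux/kvm.h>

#define NE_MEM_REGION_SIZE (2 * 1024 * 1024)

static int add_enclave_mem_region(int enclave_fd)
{
	struct kvm_userspace_memory_region mem_region = {};
	void *addr = NULL;

	/* Reserve one 2 MiB hugetlbfs-backed region. */
	addr = mmap(NULL, NE_MEM_REGION_SIZE, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr == MAP_FAILED)
		return -1;

	mem_region.memory_size = NE_MEM_REGION_SIZE;
	mem_region.userspace_addr = (__u64)addr;
	mem_region.guest_phys_addr = 0;

	/* The driver pins the pages and adds them to the enclave slot. */
	return ioctl(enclave_fd, KVM_SET_USER_MEMORY_REGION, &mem_region);
}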
Signed-off-by: Alexandru Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the WARN_ON calls.
* Update static calls sanity checks.
* Update kzfree() calls to kfree().
v1 -> v2
* Add log pattern for NE.
* Update goto labels to match their purpose.
* Remove the BUG_ON calls.
* Check if enclave max memory regions is reached when setting an enclave memory
region.
* Check if enclave state is init when setting an enclave memory region.
---
drivers/virt/nitro_enclaves/ne_misc_dev.c | 244 ++++++++++++++++++++++
1 file changed, 244 insertions(+)
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index 143259847f71..a7b0903d7d28 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -193,6 +193,222 @@ static int ne_create_vcpu_ioctl(struct ne_enclave *ne_enclave, u32 vcpu_id)
return rc;
}
+/**
+ * ne_sanity_check_user_mem_region - Sanity check the userspace memory
+ * region received during the set user memory region ioctl call.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ * @mem_region: user space memory region to be sanity checked.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_sanity_check_user_mem_region(struct ne_enclave *ne_enclave,
+ struct kvm_userspace_memory_region *mem_region)
+{
+ if ((mem_region->memory_size % MIN_MEM_REGION_SIZE) != 0) {
+ pr_err_ratelimited(NE "Mem size not multiple of 2 MiB\n");
+
+ return -EINVAL;
+ }
+
+ if ((mem_region->userspace_addr & (MIN_MEM_REGION_SIZE - 1)) ||
+ !access_ok((void __user *)(unsigned long)mem_region->userspace_addr,
+ mem_region->memory_size)) {
+ pr_err_ratelimited(NE "Invalid user space addr range\n");
+
+ return -EINVAL;
+ }
+
+ if ((mem_region->guest_phys_addr + mem_region->memory_size) <
+ mem_region->guest_phys_addr) {
+ pr_err_ratelimited(NE "Invalid guest phys addr range\n");
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * ne_set_user_memory_region_ioctl - Add user space memory region to the slot
+ * associated with the current enclave.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ * @mem_region: user space memory region to be associated with the given slot.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
+ struct kvm_userspace_memory_region *mem_region)
+{
+ struct ne_pci_dev_cmd_reply cmd_reply = {};
+ long gup_rc = 0;
+ unsigned long i = 0;
+ struct ne_mem_region *ne_mem_region = NULL;
+ unsigned long nr_phys_contig_mem_regions = 0;
+ unsigned long nr_pinned_pages = 0;
+ struct page **phys_contig_mem_regions = NULL;
+ int rc = -EINVAL;
+ struct slot_add_mem_req slot_add_mem_req = {};
+
+ if (ne_enclave->mm != current->mm)
+ return -EIO;
+
+ rc = ne_sanity_check_user_mem_region(ne_enclave, mem_region);
+ if (rc < 0)
+ return rc;
+
+ ne_mem_region = kzalloc(sizeof(*ne_mem_region), GFP_KERNEL);
+ if (!ne_mem_region)
+ return -ENOMEM;
+
+ /*
+ * TODO: Update nr_pages value to handle contiguous virtual address
+ * ranges mapped to non-contiguous physical regions. Hugetlbfs can give
+ * 2 MiB / 1 GiB contiguous physical regions.
+ */
+ ne_mem_region->nr_pages = mem_region->memory_size / MIN_MEM_REGION_SIZE;
+
+ ne_mem_region->pages = kcalloc(ne_mem_region->nr_pages,
+ sizeof(*ne_mem_region->pages),
+ GFP_KERNEL);
+ if (!ne_mem_region->pages) {
+ kfree(ne_mem_region);
+
+ return -ENOMEM;
+ }
+
+ phys_contig_mem_regions = kcalloc(ne_mem_region->nr_pages,
+ sizeof(*phys_contig_mem_regions),
+ GFP_KERNEL);
+ if (!phys_contig_mem_regions) {
+ kfree(ne_mem_region->pages);
+ kfree(ne_mem_region);
+
+ return -ENOMEM;
+ }
+
+ /*
+ * TODO: Handle non-contiguous memory regions received from user space.
+ * Hugetlbfs can give 2 MiB / 1 GiB contiguous physical regions. The
+ * virtual address space can be seen as contiguous, although it is
+ * mapped underneath to 2 MiB / 1 GiB physical regions e.g. 8 MiB
+ * virtual address space mapped to 4 physically contiguous regions of 2
+ * MiB.
+ */
+ do {
+ unsigned long tmp_nr_pages = ne_mem_region->nr_pages -
+ nr_pinned_pages;
+ struct page **tmp_pages = ne_mem_region->pages +
+ nr_pinned_pages;
+ u64 tmp_userspace_addr = mem_region->userspace_addr +
+ nr_pinned_pages * MIN_MEM_REGION_SIZE;
+
+ gup_rc = get_user_pages(tmp_userspace_addr, tmp_nr_pages,
+ FOLL_GET, tmp_pages, NULL);
+ if (gup_rc < 0) {
+ rc = gup_rc;
+
+ pr_err_ratelimited(NE "Error in gup [rc=%d]\n", rc);
+
+ unpin_user_pages(ne_mem_region->pages, nr_pinned_pages);
+
+ goto free_mem_region;
+ }
+
+ nr_pinned_pages += gup_rc;
+
+ } while (nr_pinned_pages < ne_mem_region->nr_pages);
+
+ /*
+ * TODO: Update checks once physically contiguous regions are collected
+ * based on the user space address and get_user_pages() results.
+ */
+ for (i = 0; i < ne_mem_region->nr_pages; i++) {
+ if (!PageHuge(ne_mem_region->pages[i])) {
+ pr_err_ratelimited(NE "Not a hugetlbfs page\n");
+
+ rc = -EINVAL;
+
+ goto unpin_pages;
+ }
+
+ if (huge_page_size(page_hstate(ne_mem_region->pages[i])) !=
+ MIN_MEM_REGION_SIZE) {
+ pr_err_ratelimited(NE "The page size isn't 2 MiB\n");
+
+ rc = -EINVAL;
+
+ goto unpin_pages;
+ }
+
+ /*
+ * TODO: Update once handled non-contiguous memory regions
+ * received from user space.
+ */
+ phys_contig_mem_regions[i] = ne_mem_region->pages[i];
+ }
+
+ /*
+ * TODO: Update once handled non-contiguous memory regions received
+ * from user space.
+ */
+ nr_phys_contig_mem_regions = ne_mem_region->nr_pages;
+
+ if ((ne_enclave->nr_mem_regions + nr_phys_contig_mem_regions) >
+ ne_enclave->max_mem_regions) {
+ pr_err_ratelimited(NE "Reached max memory regions %lld\n",
+ ne_enclave->max_mem_regions);
+
+ rc = -EINVAL;
+
+ goto unpin_pages;
+ }
+
+ for (i = 0; i < nr_phys_contig_mem_regions; i++) {
+ u64 phys_addr = page_to_phys(phys_contig_mem_regions[i]);
+
+ slot_add_mem_req.slot_uid = ne_enclave->slot_uid;
+ slot_add_mem_req.paddr = phys_addr;
+ /*
+ * TODO: Update memory size of physical contiguous memory
+ * region, in case of non-contiguous memory regions received
+ * from user space.
+ */
+ slot_add_mem_req.size = MIN_MEM_REGION_SIZE;
+
+ rc = ne_do_request(ne_enclave->pdev, SLOT_ADD_MEM,
+ &slot_add_mem_req, sizeof(slot_add_mem_req),
+ &cmd_reply, sizeof(cmd_reply));
+ if (rc < 0) {
+ pr_err_ratelimited(NE "Error in slot add mem [rc=%d]\n",
+ rc);
+
+ /* TODO: Only unpin memory regions not added. */
+ goto unpin_pages;
+ }
+
+ ne_enclave->nr_mem_regions++;
+
+ memset(&slot_add_mem_req, 0, sizeof(slot_add_mem_req));
+ memset(&cmd_reply, 0, sizeof(cmd_reply));
+ }
+
+ list_add(&ne_mem_region->mem_region_list_entry,
+ &ne_enclave->mem_regions_list);
+
+ kfree(phys_contig_mem_regions);
+
+ return 0;
+
+unpin_pages:
+ unpin_user_pages(ne_mem_region->pages, ne_mem_region->nr_pages);
+free_mem_region:
+ kfree(phys_contig_mem_regions);
+ kfree(ne_mem_region->pages);
+ kfree(ne_mem_region);
+
+ return rc;
+}
+
static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
@@ -267,6 +483,34 @@ static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
return 0;
}
+ case KVM_SET_USER_MEMORY_REGION: {
+ struct kvm_userspace_memory_region mem_region = {};
+ int rc = -EINVAL;
+
+ if (copy_from_user(&mem_region, (void __user *)arg,
+ sizeof(mem_region))) {
+ pr_err_ratelimited(NE "Error in copy from user\n");
+
+ return -EFAULT;
+ }
+
+ mutex_lock(&ne_enclave->enclave_info_mutex);
+
+ if (ne_enclave->state != NE_STATE_INIT) {
+ pr_err_ratelimited(NE "Enclave isn't in init state\n");
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return -EINVAL;
+ }
+
+ rc = ne_set_user_memory_region_ioctl(ne_enclave, &mem_region);
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return rc;
+ }
+
default:
return -ENOTTY;
}
--
2.20.1 (Apple Git-117)
After all the enclave resources are set, the enclave is ready to start
running.
Add ioctl command logic for starting an enclave after all its resources
- memory regions and CPUs - have been set.
The enclave start information includes the local channel addressing -
vsock CID - and the flags associated with the enclave.
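A short usage sketch (struct and ioctl names as in the sample added
later in this series; leaving enclave_cid and flags zeroed assumes the
command reply provides the CID):

#include <stdio.h>
#include <sys/ioctl.h>

#include <linux/nitro_enclaves.h>

static int start_enclave(int enclave_fd)
{
	struct enclave_start_metadata start = {};

	/* The reply carries the vsock CID assigned to the enclave. */
	if (ioctl(enclave_fd, NE_START_ENCLAVE, &start) < 0)
		return -1;

	printf("Enclave started, CID %llu\n", start.enclave_cid);

	return 0;
}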
Signed-off-by: Alexandru Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the WARN_ON calls.
* Update static calls sanity checks.
v1 -> v2
* Add log pattern for NE.
* Check if enclave state is init when starting an enclave.
* Remove the BUG_ON calls.
---
drivers/virt/nitro_enclaves/ne_misc_dev.c | 101 ++++++++++++++++++++++
1 file changed, 101 insertions(+)
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index a7b0903d7d28..2a8e61d6a260 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -409,6 +409,47 @@ static int ne_set_user_memory_region_ioctl(struct ne_enclave *ne_enclave,
return rc;
}
+/**
+ * ne_enclave_start_ioctl - Trigger enclave start after the enclave resources,
+ * such as memory and CPU, have been set.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ * @enclave_start_metadata: enclave metadata that includes enclave cid and
+ * flags and the slot uid.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_enclave_start_ioctl(struct ne_enclave *ne_enclave,
+ struct enclave_start_metadata *enclave_start_metadata)
+{
+ struct ne_pci_dev_cmd_reply cmd_reply = {};
+ struct enclave_start_req enclave_start_req = {};
+ int rc = -EINVAL;
+
+ enclave_start_metadata->slot_uid = ne_enclave->slot_uid;
+
+ enclave_start_req.enclave_cid = enclave_start_metadata->enclave_cid;
+ enclave_start_req.flags = enclave_start_metadata->flags;
+ enclave_start_req.slot_uid = enclave_start_metadata->slot_uid;
+
+ rc = ne_do_request(ne_enclave->pdev, ENCLAVE_START, &enclave_start_req,
+ sizeof(enclave_start_req), &cmd_reply,
+ sizeof(cmd_reply));
+ if (rc < 0) {
+ pr_err_ratelimited(NE "Error in enclave start [rc=%d]\n", rc);
+
+ return rc;
+ }
+
+ ne_enclave->state = NE_STATE_RUNNING;
+
+ enclave_start_metadata->enclave_cid = cmd_reply.enclave_cid;
+
+ return 0;
+}
+
static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
@@ -511,6 +552,66 @@ static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
return rc;
}
+ case NE_START_ENCLAVE: {
+ struct enclave_start_metadata enclave_start_metadata = {};
+ int rc = -EINVAL;
+
+ if (copy_from_user(&enclave_start_metadata, (void __user *)arg,
+ sizeof(enclave_start_metadata))) {
+ pr_err_ratelimited(NE "Error in copy from user\n");
+
+ return -EFAULT;
+ }
+
+ mutex_lock(&ne_enclave->enclave_info_mutex);
+
+ if (ne_enclave->state != NE_STATE_INIT) {
+ pr_err_ratelimited(NE "Enclave isn't in init state\n");
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return -EINVAL;
+ }
+
+ if (!ne_enclave->nr_mem_regions) {
+ pr_err_ratelimited(NE "Enclave has no mem regions\n");
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return -EINVAL;
+ }
+
+ if (!ne_enclave->nr_vcpus) {
+ pr_err_ratelimited(NE "Enclave has no vcpus\n");
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return -EINVAL;
+ }
+
+ if (!cpumask_empty(ne_enclave->cpu_siblings)) {
+ pr_err_ratelimited(NE "CPU siblings not used\n");
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ return -EINVAL;
+ }
+
+ rc = ne_enclave_start_ioctl(ne_enclave,
+ &enclave_start_metadata);
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+
+ if (copy_to_user((void __user *)arg, &enclave_start_metadata,
+ sizeof(enclave_start_metadata))) {
+ pr_err_ratelimited(NE "Error in copy to user\n");
+
+ return -EFAULT;
+ }
+
+ return rc;
+ }
+
default:
return -ENOTTY;
}
--
2.20.1 (Apple Git-117)
Before setting the memory regions for the enclave, the enclave image
needs to be placed in memory. After the memory regions are set, this
memory cannot be used anymore by the primary VM, as it is carved out.
Add ioctl command logic to get the offset in enclave memory where to
place the enclave image. The user space tooling then copies the enclave
image into memory at the given memory offset.
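A hedged usage sketch, under the simplifying assumption that the
enclave memory was mapped as one contiguous user space buffer that
covers the returned offset plus the image size:

#include <string.h>
#include <sys/ioctl.h>

#include <linux/nitro_enclaves.h>

static int load_enclave_image(int enclave_fd, void *enclave_mem,
			      const void *eif, size_t eif_size)
{
	struct image_load_metadata meta = {};

	/* TODO in the series: set the flags based on the image type. */
	meta.flags = 0;

	if (ioctl(enclave_fd, NE_GET_IMAGE_LOAD_METADATA, &meta) < 0)
		return -1;

	/* Place the EIF at the memory offset given by the driver. */
	memcpy((char *)enclave_mem + meta.memory_offset, eif, eif_size);

	return 0;
}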
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* No changes.
v1 -> v2
* New in v2.
---
drivers/virt/nitro_enclaves/ne_misc_dev.c | 24 +++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index 6db88e128d1f..143259847f71 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -243,6 +243,30 @@ static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
return rc;
}
+ case NE_GET_IMAGE_LOAD_METADATA: {
+ struct image_load_metadata image_load_metadata = {};
+
+ if (copy_from_user(&image_load_metadata, (void __user *)arg,
+ sizeof(image_load_metadata))) {
+ pr_err_ratelimited(NE "Error in copy from user\n");
+
+ return -EFAULT;
+ }
+
+ /* TODO: Check flags before setting the memory offset. */
+
+ image_load_metadata.memory_offset = NE_IMAGE_LOAD_OFFSET;
+
+ if (copy_to_user((void __user *)arg, &image_load_metadata,
+ sizeof(image_load_metadata))) {
+ pr_err_ratelimited(NE "Error in copy to user\n");
+
+ return -EFAULT;
+ }
+
+ return 0;
+ }
+
default:
return -ENOTTY;
}
--
2.20.1 (Apple Git-117)
An enclave is associated with an fd that is returned after the enclave
creation logic is completed. This enclave fd is further used to setup
enclave resources. Once the enclave needs to be terminated, the enclave
fd is closed.
Add logic for enclave termination, which is mapped to the enclave fd
release callback. Free the internal enclave info used for bookkeeping.
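From user space, termination is then just a close on the enclave fd,
as a sketch:

#include <unistd.h>

static void terminate_enclave(int enclave_fd)
{
	/*
	 * The release callback stops the enclave (if running), frees
	 * the slot and the internal bookkeeping.
	 */
	close(enclave_fd);
}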
Signed-off-by: Alexandru Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the WARN_ON calls.
* Update static calls sanity checks.
* Update kzfree() calls to kfree().
v1 -> v2
* Add log pattern for NE.
* Remove the BUG_ON calls.
* Update goto labels to match their purpose.
* Add early exit in release() if there was a slot alloc error in the fd creation
path.
---
drivers/virt/nitro_enclaves/ne_misc_dev.c | 168 ++++++++++++++++++++++
1 file changed, 168 insertions(+)
diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
index 2a8e61d6a260..cc0b9927d26f 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -619,9 +619,177 @@ static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
return 0;
}
+/**
+ * ne_enclave_remove_all_mem_region_entries - Remove all memory region
+ * entries from the enclave data structure.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ */
+static void ne_enclave_remove_all_mem_region_entries(
+ struct ne_enclave *ne_enclave)
+{
+ struct ne_mem_region *ne_mem_region = NULL;
+ struct ne_mem_region *ne_mem_region_tmp = NULL;
+
+ list_for_each_entry_safe(ne_mem_region, ne_mem_region_tmp,
+ &ne_enclave->mem_regions_list,
+ mem_region_list_entry) {
+ list_del(&ne_mem_region->mem_region_list_entry);
+
+ unpin_user_pages(ne_mem_region->pages,
+ ne_mem_region->nr_pages);
+
+ kfree(ne_mem_region->pages);
+
+ kfree(ne_mem_region);
+ }
+}
+
+/**
+ * ne_enclave_remove_all_vcpu_id_entries - Remove all vCPU id entries
+ * from the enclave data structure.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ */
+static void ne_enclave_remove_all_vcpu_id_entries(struct ne_enclave *ne_enclave)
+{
+ unsigned int cpu = 0;
+ struct ne_vcpu_id *ne_vcpu_id = NULL;
+ struct ne_vcpu_id *ne_vcpu_id_tmp = NULL;
+
+ mutex_lock(&ne_cpu_pool.mutex);
+
+ list_for_each_entry_safe(ne_vcpu_id, ne_vcpu_id_tmp,
+ &ne_enclave->vcpu_ids_list,
+ vcpu_id_list_entry) {
+ list_del(&ne_vcpu_id->vcpu_id_list_entry);
+
+ /* Update the available CPU pool. */
+ cpumask_set_cpu(ne_vcpu_id->vcpu_id, ne_cpu_pool.avail);
+
+ kfree(ne_vcpu_id);
+ }
+
+ /* If any siblings left in the enclave CPU pool, move to available. */
+ for_each_cpu(cpu, ne_enclave->cpu_siblings) {
+ cpumask_clear_cpu(cpu, ne_enclave->cpu_siblings);
+
+ cpumask_set_cpu(cpu, ne_cpu_pool.avail);
+ }
+
+ free_cpumask_var(ne_enclave->cpu_siblings);
+
+ mutex_unlock(&ne_cpu_pool.mutex);
+}
+
+/**
+ * ne_pci_dev_remove_enclave_entry - Remove enclave entry from the data
+ * structure that is part of the PCI device private data.
+ *
+ * This function gets called with the ne_pci_dev enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ * @ne_pci_dev: private data associated with the PCI device.
+ */
+static void ne_pci_dev_remove_enclave_entry(struct ne_enclave *ne_enclave,
+ struct ne_pci_dev *ne_pci_dev)
+{
+ struct ne_enclave *ne_enclave_entry = NULL;
+ struct ne_enclave *ne_enclave_entry_tmp = NULL;
+
+ list_for_each_entry_safe(ne_enclave_entry, ne_enclave_entry_tmp,
+ &ne_pci_dev->enclaves_list,
+ enclave_list_entry) {
+ if (ne_enclave_entry->slot_uid == ne_enclave->slot_uid) {
+ list_del(&ne_enclave_entry->enclave_list_entry);
+
+ break;
+ }
+ }
+}
+
static int ne_enclave_release(struct inode *inode, struct file *file)
{
+ struct ne_pci_dev_cmd_reply cmd_reply = {};
+ struct enclave_stop_req enclave_stop_request = {};
+ struct ne_enclave *ne_enclave = file->private_data;
+ struct ne_pci_dev *ne_pci_dev = NULL;
+ int rc = -EINVAL;
+ struct slot_free_req slot_free_req = {};
+
+ if (!ne_enclave)
+ return 0;
+
+ /*
+ * Early exit in case there is an error in the enclave creation logic
+ * and fput() is called on the cleanup path.
+ */
+ if (!ne_enclave->slot_uid)
+ return 0;
+
+ if (!ne_enclave->pdev)
+ return -EINVAL;
+
+ ne_pci_dev = pci_get_drvdata(ne_enclave->pdev);
+ if (!ne_pci_dev)
+ return -EINVAL;
+
+ /*
+ * Acquire the enclave list mutex before the enclave mutex
+ * in order to avoid deadlocks with @ref ne_event_work_handler.
+ */
+ mutex_lock(&ne_pci_dev->enclaves_list_mutex);
+ mutex_lock(&ne_enclave->enclave_info_mutex);
+
+ if (ne_enclave->state != NE_STATE_INIT &&
+ ne_enclave->state != NE_STATE_STOPPED) {
+ enclave_stop_request.slot_uid = ne_enclave->slot_uid;
+
+ rc = ne_do_request(ne_enclave->pdev, ENCLAVE_STOP,
+ &enclave_stop_request,
+ sizeof(enclave_stop_request), &cmd_reply,
+ sizeof(cmd_reply));
+ if (rc < 0) {
+ pr_err_ratelimited(NE "Error in enclave stop [rc=%d]\n",
+ rc);
+
+ goto unlock_mutex;
+ }
+
+ memset(&cmd_reply, 0, sizeof(cmd_reply));
+ }
+
+ slot_free_req.slot_uid = ne_enclave->slot_uid;
+
+ rc = ne_do_request(ne_enclave->pdev, SLOT_FREE, &slot_free_req,
+ sizeof(slot_free_req), &cmd_reply,
+ sizeof(cmd_reply));
+ if (rc < 0) {
+ pr_err_ratelimited(NE "Error in slot free [rc=%d]\n", rc);
+
+ goto unlock_mutex;
+ }
+
+ ne_pci_dev_remove_enclave_entry(ne_enclave, ne_pci_dev);
+ ne_enclave_remove_all_mem_region_entries(ne_enclave);
+ ne_enclave_remove_all_vcpu_id_entries(ne_enclave);
+
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+ mutex_unlock(&ne_pci_dev->enclaves_list_mutex);
+
+ kfree(ne_enclave);
+
return 0;
+
+unlock_mutex:
+ mutex_unlock(&ne_enclave->enclave_info_mutex);
+ mutex_unlock(&ne_pci_dev->enclaves_list_mutex);
+
+ return rc;
}
static __poll_t ne_enclave_poll(struct file *file, poll_table *wait)
--
2.20.1 (Apple Git-117)
Signed-off-by: Alexandru Vasile <[email protected]>
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the include directory to use the uapi from the kernel.
* Remove the GPL additional wording as SPDX-License-Identifier is already in
place.
v1 -> v2
* New in v2.
---
samples/nitro_enclaves/.gitignore | 2 +
samples/nitro_enclaves/Makefile | 16 +
samples/nitro_enclaves/ne_ioctl_sample.c | 490 +++++++++++++++++++++++
3 files changed, 508 insertions(+)
create mode 100644 samples/nitro_enclaves/.gitignore
create mode 100644 samples/nitro_enclaves/Makefile
create mode 100644 samples/nitro_enclaves/ne_ioctl_sample.c
diff --git a/samples/nitro_enclaves/.gitignore b/samples/nitro_enclaves/.gitignore
new file mode 100644
index 000000000000..827934129c90
--- /dev/null
+++ b/samples/nitro_enclaves/.gitignore
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+ne_ioctl_sample
diff --git a/samples/nitro_enclaves/Makefile b/samples/nitro_enclaves/Makefile
new file mode 100644
index 000000000000..a3ec78fefb52
--- /dev/null
+++ b/samples/nitro_enclaves/Makefile
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+
+# Enclave lifetime management support for Nitro Enclaves (NE) - ioctl sample
+# usage.
+
+.PHONY: all clean
+
+CFLAGS += -Wall
+
+all:
+ $(CC) $(CFLAGS) -o ne_ioctl_sample ne_ioctl_sample.c -lpthread
+
+clean:
+ rm -f ne_ioctl_sample
diff --git a/samples/nitro_enclaves/ne_ioctl_sample.c b/samples/nitro_enclaves/ne_ioctl_sample.c
new file mode 100644
index 000000000000..ad5595acc012
--- /dev/null
+++ b/samples/nitro_enclaves/ne_ioctl_sample.c
@@ -0,0 +1,490 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ */
+
+/**
+ * Sample flow of using the ioctl interface provided by the Nitro Enclaves (NE)
+ * kernel driver.
+ *
+ * Usage
+ * -----
+ *
+ * Load the nitro_enclaves module, setting also the enclave CPU pool.
+ *
+ * See the cpu list section from the kernel documentation.
+ * https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
+ *
+ * insmod drivers/virt/nitro_enclaves/nitro_enclaves.ko ne_cpus=<cpu-list>
+ * lsmod
+ *
+ * Check dmesg for any warnings / errors through the NE driver lifetime / usage.
+ * The NE logs contain the "nitro_enclaves" pattern.
+ *
+ * dmesg
+ *
+ * Check the online / offline CPU list. The CPUs from the pool should be
+ * offlined.
+ *
+ * lscpu
+ *
+ * Setup hugetlbfs huge pages.
+ *
+ * echo <nr_hugepages> > /proc/sys/vm/nr_hugepages
+ *
+ * In this example 256 hugepages of 2 MiB are used.
+ *
+ * Build and run the NE sample.
+ *
+ * make -C samples/nitro_enclaves clean
+ * make -C samples/nitro_enclaves
+ * ./samples/nitro_enclaves/ne_ioctl_sample <path_to_enclave_image>
+ *
+ * Unload the nitro_enclaves module.
+ *
+ * rmmod nitro_enclaves
+ * lsmod
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <limits.h>
+#include <poll.h>
+#include <pthread.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/eventfd.h>
+#include <sys/mman.h>
+#include <sys/socket.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+#include <linux/nitro_enclaves.h>
+#include <linux/vm_sockets.h>
+
+/* Nitro Enclaves (NE) misc device that provides the ioctl interface. */
+#define NE_DEV_NAME "/dev/nitro_enclaves"
+
+/* Timeout in seconds / milliseconds for each poll event. */
+#define POLL_WAIT_TIME (60)
+#define POLL_WAIT_TIME_MS (POLL_WAIT_TIME * 1000)
+
+/* Amount of time in seconds for the process to keep the enclave alive. */
+#define SLEEP_TIME (300)
+
+/* Enclave vCPUs metadata. */
+#define DEFAULT_NR_VCPUS (2)
+
+/* Enclave memory metadata */
+/* Min memory size - 2 MiB */
+#define MIN_MEM_REGION_SIZE (2 * 1024 * 1024)
+/* 256 memory regions of 2 MiB */
+#define DEFAULT_NR_MEM_REGIONS (256)
+
+/* Vsock addressing for enclave image loading heartbeat. */
+#define VSOCK_CID (3)
+#define VSOCK_PORT (9000)
+#define HEARTBEAT_VALUE (0xb7)
+
+struct ne_mem_region {
+ void *mem_addr;
+ size_t mem_size;
+};
+
+struct ne_vcpu {
+ int vcpu_fd;
+ unsigned int vcpu_id;
+};
+
+/* Thread function for polling the enclave fd. */
+void *ne_poll_enclave_fd(void *data)
+{
+ int enclave_fd = *(int *)data;
+ struct pollfd fds[1] = {};
+ int i = 0;
+ int rc = 0;
+
+ printf("Running from poll thread, enclave fd %d\n", enclave_fd);
+
+ fds[0].fd = enclave_fd;
+ fds[0].events = POLLIN | POLLERR | POLLHUP;
+
+ /* Keep on polling until the current process is terminated. */
+ while (1) {
+ printf("[iter %d] Polling ...\n", i);
+
+ rc = poll(fds, 1, POLL_WAIT_TIME_MS);
+ if (rc < 0) {
+ printf("Error in poll [%m]\n");
+
+ return NULL;
+ }
+
+ i++;
+
+ if (!rc) {
+ printf("Poll: %d seconds elapsed\n",
+ i * POLL_WAIT_TIME);
+
+ continue;
+ }
+
+ printf("Poll received value %d\n", fds[0].revents);
+ }
+
+ return NULL;
+}
+
+/* Allocate memory region that will be used for the enclave. */
+int ne_alloc_mem_region(struct ne_mem_region *ne_mem_region)
+{
+ if (!ne_mem_region)
+ return -EINVAL;
+
+ if (!ne_mem_region->mem_size)
+ return -EINVAL;
+
+ ne_mem_region->mem_addr = mmap(NULL, ne_mem_region->mem_size,
+ PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS |
+ MAP_HUGETLB, -1, 0);
+ if (ne_mem_region->mem_addr == MAP_FAILED) {
+ printf("Error in mmap memory [%m]\n");
+
+ return -1;
+ }
+
+ return 0;
+}
+
+/* Place enclave image in enclave memory. */
+int ne_load_enclave_image(int enclave_fd, struct ne_mem_region ne_mem_regions[],
+ char enclave_image_path[])
+{
+ struct image_load_metadata image_load_metadata = {};
+ int rc = 0;
+
+ if (enclave_fd < 0)
+ return -EINVAL;
+
+ /* TODO: Set flags based on enclave image type. */
+ image_load_metadata.flags = 0;
+
+ rc = ioctl(enclave_fd, NE_GET_IMAGE_LOAD_METADATA,
+ &image_load_metadata);
+ if (rc < 0) {
+ printf("Error in get image load metadata [rc=%d]\n", rc);
+
+ return rc;
+ }
+
+ printf("Enclave image offset in enclave memory is %lld\n",
+ image_load_metadata.memory_offset);
+
+ /*
+ * TODO: Copy enclave image in enclave memory starting from the given
+ * offset.
+ */
+
+ return 0;
+}
+
+/* Wait for a heartbeat from the enclave to check it has booted. */
+int ne_check_enclave_booted(void)
+{
+ struct sockaddr_vm client_vsock_addr = {};
+ int client_vsock_fd = -1;
+ socklen_t client_vsock_len = sizeof(client_vsock_addr);
+ struct pollfd fds[1] = {};
+ int rc = 0;
+ unsigned char recv_buf = 0;
+ struct sockaddr_vm server_vsock_addr = {
+ .svm_family = AF_VSOCK,
+ .svm_cid = VSOCK_CID,
+ .svm_port = VSOCK_PORT,
+ };
+ int server_vsock_fd = 0;
+
+ server_vsock_fd = socket(AF_VSOCK, SOCK_STREAM, 0);
+ if (server_vsock_fd < 0) {
+ rc = server_vsock_fd;
+
+ printf("Error in socket [rc=%d]\n", rc);
+
+ return rc;
+ }
+
+ rc = bind(server_vsock_fd, (struct sockaddr *)&server_vsock_addr,
+ sizeof(server_vsock_addr));
+ if (rc < 0) {
+ printf("Error in bind [rc=%d]\n", rc);
+
+ goto out;
+ }
+
+ rc = listen(server_vsock_fd, 1);
+ if (rc < 0) {
+ printf("Error in listen [rc=%d]\n", rc);
+
+ goto out;
+ }
+
+ fds[0].fd = server_vsock_fd;
+ fds[0].events = POLLIN;
+
+ rc = poll(fds, 1, POLL_WAIT_TIME_MS);
+ if (rc < 0) {
+ printf("Error in poll [%m]\n");
+
+ goto out;
+ }
+
+ if (!rc) {
+ printf("Poll timeout, %d seconds elapsed\n", POLL_WAIT_TIME);
+
+ rc = -ETIMEDOUT;
+
+ goto out;
+ }
+
+ if ((fds[0].revents & POLLIN) == 0) {
+ printf("Poll received value %d\n", fds[0].revents);
+
+ rc = -EINVAL;
+
+ goto out;
+ }
+
+ client_vsock_fd = accept(server_vsock_fd,
+ (struct sockaddr *)&client_vsock_addr, &client_vsock_len);
+ if (client_vsock_fd < 0) {
+ rc = client_vsock_fd;
+
+ printf("Error in accept [rc=%d]\n", rc);
+
+ goto out;
+ }
+
+ /*
+ * Read the heartbeat value that the init process in the enclave sends
+ * after vsock connect, using the fd of the accepted connection.
+ */
+ rc = read(client_vsock_fd, &recv_buf, sizeof(recv_buf));
+ if (rc < 0) {
+ printf("Error in read [rc=%d]\n", rc);
+
+ goto out_close_client;
+ }
+
+ if (rc != sizeof(recv_buf) || recv_buf != HEARTBEAT_VALUE) {
+ printf("Read %d instead of %d\n", recv_buf, HEARTBEAT_VALUE);
+
+ rc = -EINVAL;
+
+ goto out_close_client;
+ }
+
+ close(client_vsock_fd);
+ close(server_vsock_fd);
+
+ return 0;
+
+out_close_client:
+ close(client_vsock_fd);
+out:
+ close(server_vsock_fd);
+
+ return rc;
+}
+
+/* Set memory region for the given enclave. */
+int ne_set_mem_region(int enclave_fd, struct ne_mem_region ne_mem_region)
+{
+ struct kvm_userspace_memory_region mem_region = {};
+ int rc = 0;
+
+ if (enclave_fd < 0)
+ return -EINVAL;
+
+ mem_region.slot = 0;
+ mem_region.memory_size = ne_mem_region.mem_size;
+ mem_region.userspace_addr = (__u64)ne_mem_region.mem_addr;
+ mem_region.guest_phys_addr = 0;
+
+ rc = ioctl(enclave_fd, KVM_SET_USER_MEMORY_REGION, &mem_region);
+ if (rc < 0) {
+ printf("Error in set user memory region [rc=%d]\n", rc);
+
+ return rc;
+ }
+
+ return 0;
+}
+
+/* Unmap all the memory regions that were set aside for the enclave. */
+void ne_free_mem_regions(struct ne_mem_region ne_mem_regions[])
+{
+ unsigned int i = 0;
+
+ for (i = 0; i < DEFAULT_NR_MEM_REGIONS; i++)
+ munmap(ne_mem_regions[i].mem_addr, ne_mem_regions[i].mem_size);
+}
+
+/* Create enclave vCPU. */
+int ne_create_vcpu(int enclave_fd, struct ne_vcpu *ne_vcpu)
+{
+ if (enclave_fd < 0)
+ return -EINVAL;
+
+ if (!ne_vcpu)
+ return -EINVAL;
+
+ ne_vcpu->vcpu_fd = ioctl(enclave_fd, KVM_CREATE_VCPU,
+ &ne_vcpu->vcpu_id);
+ if (ne_vcpu->vcpu_fd < 0) {
+ printf("Error in create vcpu [rc=%d]\n", ne_vcpu->vcpu_fd);
+
+ return ne_vcpu->vcpu_fd;
+ }
+
+ return 0;
+}
+
+/* Release enclave vCPU fd(s). */
+void ne_release_vcpus(struct ne_vcpu ne_vcpus[])
+{
+ unsigned int i = 0;
+
+ for (i = 0; i < DEFAULT_NR_VCPUS; i++)
+ if (ne_vcpus[i].vcpu_fd > 0)
+ close(ne_vcpus[i].vcpu_fd);
+}
+
+int main(int argc, char *argv[])
+{
+ int enclave_fd = 0;
+ char enclave_image_path[PATH_MAX] = {};
+ unsigned int i = 0;
+ int ne_dev_fd = 0;
+ struct ne_mem_region ne_mem_regions[DEFAULT_NR_MEM_REGIONS] = {};
+ struct enclave_start_metadata ne_start_metadata = {};
+ struct ne_vcpu ne_vcpus[DEFAULT_NR_VCPUS] = {};
+ int rc = 0;
+ pthread_t thread_id = 0;
+ unsigned long type = 0;
+
+ if (argc != 2) {
+ printf("Usage: %s <path_to_enclave_image>\n", argv[0]);
+
+ exit(EXIT_FAILURE);
+ }
+
+ strncpy(enclave_image_path, argv[1], sizeof(enclave_image_path) - 1);
+
+ ne_dev_fd = open(NE_DEV_NAME, O_RDWR | O_CLOEXEC);
+ if (ne_dev_fd < 0) {
+ printf("Error in open NE device [rc=%d]\n", ne_dev_fd);
+
+ exit(EXIT_FAILURE);
+ }
+
+ printf("Creating enclave slot ...\n");
+
+ enclave_fd = ioctl(ne_dev_fd, KVM_CREATE_VM, &type);
+
+ close(ne_dev_fd);
+
+ if (enclave_fd < 0) {
+ printf("Error in create enclave slot [rc=%d]\n", enclave_fd);
+
+ exit(EXIT_FAILURE);
+ }
+
+ printf("Enclave fd %d\n", enclave_fd);
+
+ rc = pthread_create(&thread_id, NULL, ne_poll_enclave_fd,
+ (void *)&enclave_fd);
+ if (rc < 0) {
+ printf("Error in thread create [rc=%d]\n", rc);
+
+ close(enclave_fd);
+
+ exit(EXIT_FAILURE);
+ }
+
+ for (i = 0; i < DEFAULT_NR_MEM_REGIONS; i++) {
+ ne_mem_regions[i].mem_size = MIN_MEM_REGION_SIZE;
+ rc = ne_alloc_mem_region(&ne_mem_regions[i]);
+ if (rc < 0) {
+ printf("Error in alloc mem region, iter %d [rc=%d]\n",
+ i, rc);
+
+ goto release_enclave_fd;
+ }
+ }
+
+ rc = ne_load_enclave_image(enclave_fd, ne_mem_regions,
+ enclave_image_path);
+ if (rc < 0) {
+ printf("Error in load enclave image [rc=%d]\n", rc);
+
+ goto release_enclave_fd;
+ }
+
+ for (i = 0; i < DEFAULT_NR_MEM_REGIONS; i++) {
+ rc = ne_set_mem_region(enclave_fd, ne_mem_regions[i]);
+ if (rc < 0) {
+ printf("Error in set mem region, iter %d [rc=%d]\n",
+ i, rc);
+
+ goto release_enclave_fd;
+ }
+ }
+
+ printf("Enclave memory regions were added\n");
+
+ for (i = 0; i < DEFAULT_NR_VCPUS; i++) {
+ /*
+ * The vCPU is chosen from the enclave vCPU pool, this value is
+ * not used for now.
+ */
+ ne_vcpus[i].vcpu_id = i;
+ rc = ne_create_vcpu(enclave_fd, &ne_vcpus[i]);
+ if (rc < 0) {
+ printf("Error in create vcpu, iter %d [rc=%d]\n",
+ i, rc);
+
+ goto release_enclave_vcpu_fds;
+ }
+ }
+
+ printf("Enclave vCPUs were created\n");
+
+ rc = ioctl(enclave_fd, NE_START_ENCLAVE, &ne_start_metadata);
+ if (rc < 0) {
+ printf("Error in start enclave [rc=%d]\n", rc);
+
+ goto release_enclave_vcpu_fds;
+ }
+
+ printf("Enclave started, CID %llu\n", ne_start_metadata.enclave_cid);
+
+ /*
+ * TODO: Check for enclave heartbeat after it has started to see if it
+ * has booted.
+ */
+
+ printf("Entering sleep for %d seconds ...\n", SLEEP_TIME);
+
+ sleep(SLEEP_TIME);
+
+ ne_release_vcpus(ne_vcpus);
+
+ close(enclave_fd);
+
+ ne_free_mem_regions(ne_mem_regions);
+
+ exit(EXIT_SUCCESS);
+
+release_enclave_vcpu_fds:
+ ne_release_vcpus(ne_vcpus);
+release_enclave_fd:
+ close(enclave_fd);
+ ne_free_mem_regions(ne_mem_regions);
+
+ exit(EXIT_FAILURE);
+}
--
2.20.1 (Apple Git-117)
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the GPL additional wording as SPDX-License-Identifier is already in
place.
v1 -> v2
* Update path to Kconfig to match the drivers/virt/nitro_enclaves directory.
* Update help in Kconfig.
---
drivers/virt/Kconfig | 2 ++
drivers/virt/nitro_enclaves/Kconfig | 16 ++++++++++++++++
2 files changed, 18 insertions(+)
create mode 100644 drivers/virt/nitro_enclaves/Kconfig
diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
index 363af2eaf2ba..ae82460cdec2 100644
--- a/drivers/virt/Kconfig
+++ b/drivers/virt/Kconfig
@@ -32,4 +32,6 @@ config FSL_HV_MANAGER
partition shuts down.
source "drivers/virt/vboxguest/Kconfig"
+
+source "drivers/virt/nitro_enclaves/Kconfig"
endif
diff --git a/drivers/virt/nitro_enclaves/Kconfig b/drivers/virt/nitro_enclaves/Kconfig
new file mode 100644
index 000000000000..d64725503aff
--- /dev/null
+++ b/drivers/virt/nitro_enclaves/Kconfig
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+
+# Amazon Nitro Enclaves (NE) support.
+# Nitro is a hypervisor that has been developed by Amazon.
+
+config NITRO_ENCLAVES
+ tristate "Nitro Enclaves Support"
+ depends on HOTPLUG_CPU
+ help
+ This driver provides support for enclave lifetime management
+ for Nitro Enclaves (NE).
+
+ To compile this driver as a module, choose M here.
+ The module will be called nitro_enclaves.
--
2.20.1 (Apple Git-117)
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Remove the GPL additional wording as SPDX-License-Identifier is already in
place.
v1 -> v2
* Update path to Makefile to match the drivers/virt/nitro_enclaves directory.
---
drivers/virt/Makefile | 2 ++
drivers/virt/nitro_enclaves/Makefile | 11 +++++++++++
2 files changed, 13 insertions(+)
create mode 100644 drivers/virt/nitro_enclaves/Makefile
diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
index fd331247c27a..f28425ce4b39 100644
--- a/drivers/virt/Makefile
+++ b/drivers/virt/Makefile
@@ -5,3 +5,5 @@
obj-$(CONFIG_FSL_HV_MANAGER) += fsl_hypervisor.o
obj-y += vboxguest/
+
+obj-$(CONFIG_NITRO_ENCLAVES) += nitro_enclaves/
diff --git a/drivers/virt/nitro_enclaves/Makefile b/drivers/virt/nitro_enclaves/Makefile
new file mode 100644
index 000000000000..e9f4fcd1591e
--- /dev/null
+++ b/drivers/virt/nitro_enclaves/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+
+# Enclave lifetime management support for Nitro Enclaves (NE).
+
+obj-$(CONFIG_NITRO_ENCLAVES) += nitro_enclaves.o
+
+nitro_enclaves-y := ne_pci_dev.o ne_misc_dev.o
+
+ccflags-y += -Wall
--
2.20.1 (Apple Git-117)
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* No changes.
v1 -> v2
* New in v2.
---
Documentation/nitro_enclaves/ne_overview.txt | 86 ++++++++++++++++++++
1 file changed, 86 insertions(+)
create mode 100644 Documentation/nitro_enclaves/ne_overview.txt
diff --git a/Documentation/nitro_enclaves/ne_overview.txt b/Documentation/nitro_enclaves/ne_overview.txt
new file mode 100644
index 000000000000..be8bb3d84132
--- /dev/null
+++ b/Documentation/nitro_enclaves/ne_overview.txt
@@ -0,0 +1,86 @@
+Nitro Enclaves
+==============
+
+Nitro Enclaves (NE) is a new Amazon Elastic Compute Cloud (EC2) capability
+that allows customers to carve out isolated compute environments within EC2
+instances [1].
+
+For example, an application that processes sensitive data and runs in a VM
+can be separated from other applications running in the same VM. This
+application then runs in a separate VM from the primary VM, namely an enclave.
+
+An enclave runs alongside the VM that spawned it. This setup matches the
+needs of low latency applications. The resources that are allocated for the
+enclave, such as memory and CPU, are carved out of the primary VM. Each
+enclave is mapped to a process running in the primary VM that communicates
+with the NE driver via an ioctl interface.
+
+In this sense, there are two components:
+
+1. An enclave abstraction process - a user space process running in the primary
+VM guest that uses the provided ioctl interface of the NE driver to spawn an
+enclave VM (that's 2 below).
+
+How does all this get to an enclave VM running on the host?
+
+There is an NE emulated PCI device exposed to the primary VM. The driver for
+new PCI device is included in the NE driver.
+
+The ioctl logic is mapped to PCI device commands e.g. the NE_START_ENCLAVE ioctl
+maps to an enclave start PCI command or the KVM_SET_USER_MEMORY_REGION maps to
+an add memory PCI command. The PCI device commands are then translated into
+actions taken on the hypervisor side; that's the Nitro hypervisor running on the
+host where the primary VM is running. The Nitro hypervisor is based on core KVM
+technology.
+
+2. The enclave itself - a VM running on the same host as the primary VM that
+spawned it. Memory and CPUs are carved out of the primary VM and are dedicated
+for the enclave VM. An enclave does not have persistent storage attached.
+
+An enclave communicates with the primary VM via a local communication channel,
+using virtio-vsock [2]. The primary VM has a virtio-pci vsock emulated device,
+while the enclave VM has a virtio-mmio vsock emulated device. The vsock device
+uses eventfd for signaling. The enclave VM sees the usual interfaces - local
+APIC and IOAPIC - to get interrupts from the virtio-vsock device. The virtio-mmio
+device is placed in memory below the typical 4 GiB.
+
+The application that runs in the enclave needs to be packaged in an enclave
+image together with the OS (e.g. kernel, ramdisk, init) that will run in the
+enclave VM. The enclave VM has its own kernel and follows the standard Linux
+boot protocol.
+
+The kernel bzImage, the kernel command line, the ramdisk(s) are part of the
+Enclave Image Format (EIF); plus an EIF header including metadata such as magic
+number, EIF version, image size and CRC.
+
+Hash values are computed for the entire enclave image (EIF), the kernel and
+ramdisk(s). That's used, for example, to check that the enclave image that is
+loaded in the enclave VM is the one that was intended to be run.
+
+These crypto measurements are included in a signed attestation document
+generated by the Nitro Hypervisor and further used to prove the identity of the
+enclave; KMS is an example of a service that NE is integrated with and that
+checks the attestation doc.
+
+The enclave image (EIF) is loaded in the enclave memory at offset 8 MiB. The
+init process in the enclave connects to the vsock CID of the primary VM and a
+predefined port - 9000 - to send a heartbeat value - 0xb7. This mechanism is
+used to check in the primary VM that the enclave has booted.
+
+If the enclave VM crashes or gracefully exits, an interrupt event is received by
+the NE driver. This event is then forwarded to the user space enclave process
+running in the primary VM via a poll notification mechanism. Then the user space
+enclave process can exit.
+
+The NE driver for enclave lifetime management provides an ioctl interface to the
+user space. It includes the NE PCI device driver that is the means of
+communication with the hypervisor running on the host where the primary VM and
+the enclave are launched.
+
+The proposed solution follows the KVM model and uses KVM ioctls to create
+and set resources for enclaves. Additional NE ioctl commands, besides the
+ones provided by KVM, are used to start an enclave and get the memory
+offset for in-memory enclave image loading.
+
+[1] https://aws.amazon.com/ec2/nitro/nitro-enclaves/
+[2] http://man7.org/linux/man-pages/man7/vsock.7.html
--
2.20.1 (Apple Git-117)
Signed-off-by: Andra Paraschiv <[email protected]>
---
Changelog
v2 -> v3
* Update file entries to be in alphabetical order.
v1 -> v2
* No changes.
---
MAINTAINERS | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 50659d76976b..56d529256ba4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11968,6 +11968,19 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lftan/nios2.git
F: arch/nios2/
+NITRO ENCLAVES (NE)
+M: Andra Paraschiv <[email protected]>
+M: Alexandru Vasile <[email protected]>
+M: Alexandru Ciobotaru <[email protected]>
+L: [email protected]
+S: Supported
+W: https://aws.amazon.com/ec2/nitro/nitro-enclaves/
+F: Documentation/nitro_enclaves/
+F: drivers/virt/nitro_enclaves/
+F: include/linux/nitro_enclaves.h
+F: include/uapi/linux/nitro_enclaves.h
+F: samples/nitro_enclaves/
+
NOHZ, DYNTICKS SUPPORT
M: Frederic Weisbecker <[email protected]>
M: Thomas Gleixner <[email protected]>
--
2.20.1 (Apple Git-117)
On Tue, May 26, 2020 at 01:13:19AM +0300, Andra Paraschiv wrote:
> +/* Nitro Enclaves (NE) misc device */
> +extern struct miscdevice ne_miscdevice;
Why does your misc device need to be in a .h file?
Having the patch series like this (add random .h files, and then start
to use them), is hard to review. Would you want to try to review a
series written in this way?
thanks,
greg k-h
On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
> +#define NE "nitro_enclaves: "
Again, no need for this.
> +#define NE_DEV_NAME "nitro_enclaves"
KBUILD_MODNAME?
> +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
> +
> +static char *ne_cpus;
> +module_param(ne_cpus, charp, 0644);
> +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
Again, please do not do this.
Can you get the other amazon.com developers on the cc: list to review
this before you send it out again? I feel like I am doing basic review
of things that should be easily caught by them before you ask the
community to review your code.
And get them to sign off on it too, showing they agree with the design
decisions here :)
thanks,
greg k-h
On Tue, May 26, 2020 at 01:13:18AM +0300, Andra Paraschiv wrote:
> +struct enclave_get_slot_req {
> + /* Context ID (CID) for the enclave vsock device. */
> + u64 enclave_cid;
> +} __attribute__ ((__packed__));
Can you really "pack" a single member structure?
Anyway, we have better ways to specify this instead of the "raw"
__attribute__ option. But first see if you really need any of these, at
first glance, I do not think you do at all, and they can all be removed.
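For reference, the kernel's macro form would look like this (a sketch
only, in case the attribute really were needed; the point stands that it
can likely be dropped here):

#include <linux/compiler_attributes.h>
#include <linux/types.h>

struct enclave_get_slot_req {
	/* Context ID (CID) for the enclave vsock device. */
	u64 enclave_cid;
} __packed;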
thanks,
greg k-h
On Tue, May 26, 2020 at 01:13:32AM +0300, Andra Paraschiv wrote:
> Signed-off-by: Alexandru Vasile <[email protected]>
> Signed-off-by: Andra Paraschiv <[email protected]>
Again, I can't take patches without any changelog text, and you
shouldn't be writing them, as they are just lazy :)
thanks,
greg k-h
On Tue, May 26, 2020 at 01:13:20AM +0300, Andra Paraschiv wrote:
> The Nitro Enclaves PCI device is used by the kernel driver as a means of
> communication with the hypervisor on the host where the primary VM and
> the enclaves run. It handles requests with regard to enclave lifetime.
>
> Setup the PCI device driver and add support for MSI-X interrupts.
>
> Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
> Signed-off-by: Alexandru Ciobotaru <[email protected]>
> Signed-off-by: Andra Paraschiv <[email protected]>
> ---
> Changelog
>
> v2 -> v3
>
> * Remove the GPL additional wording as SPDX-License-Identifier is already in
> place.
> * Remove the WARN_ON calls.
> * Remove linux/bug include that is not needed.
> * Update static calls sanity checks.
> * Remove "ratelimited" from the logs that are not in the ioctl call paths.
> * Update kzfree() calls to kfree().
>
> v1 -> v2
>
> * Add log pattern for NE.
> * Update PCI device setup functions to receive PCI device data structure and
> then get private data from it inside the functions logic.
> * Remove the BUG_ON calls.
> * Add teardown function for MSI-X setup.
> * Update goto labels to match their purpose.
> * Implement TODO for NE PCI device disable state check.
> * Update function name for NE PCI device probe / remove.
> ---
> drivers/virt/nitro_enclaves/ne_pci_dev.c | 252 +++++++++++++++++++++++
> 1 file changed, 252 insertions(+)
> create mode 100644 drivers/virt/nitro_enclaves/ne_pci_dev.c
>
> diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.c b/drivers/virt/nitro_enclaves/ne_pci_dev.c
> new file mode 100644
> index 000000000000..0b66166787b6
> --- /dev/null
> +++ b/drivers/virt/nitro_enclaves/ne_pci_dev.c
> @@ -0,0 +1,252 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
> + */
> +
> +/* Nitro Enclaves (NE) PCI device driver. */
> +
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/list.h>
> +#include <linux/mutex.h>
> +#include <linux/module.h>
> +#include <linux/nitro_enclaves.h>
> +#include <linux/pci.h>
> +#include <linux/types.h>
> +#include <linux/wait.h>
> +
> +#include "ne_misc_dev.h"
> +#include "ne_pci_dev.h"
> +
> +#define DEFAULT_TIMEOUT_MSECS (120000) /* 120 sec */
> +
> +#define NE "nitro_enclaves: "
Why is this needed? The dev_* functions should give you all the
information that you need to properly describe the driver and device in
question. No extra "prefixes" should be needed at all.
thanks,
greg k-h
On 26.05.20 14:33, Greg KH wrote:
>
> On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
>>
>>
>> On 26.05.20 08:51, Greg KH wrote:
>>>
>>> On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
>>>> +#define NE "nitro_enclaves: "
>>>
>>> Again, no need for this.
>>>
>>>> +#define NE_DEV_NAME "nitro_enclaves"
>>>
>>> KBUILD_MODNAME?
>>>
>>>> +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
>>>> +
>>>> +static char *ne_cpus;
>>>> +module_param(ne_cpus, charp, 0644);
>>>> +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
>>>
>>> Again, please do not do this.
>>
>> I actually asked her to put this one in specifically.
>>
>> The concept of this parameter is very similar to isolcpus= and maxcpus= in
>> that it takes CPUs away from Linux and instead donates them to the
>> underlying hypervisor, so that it can spawn enclaves using them.
>>
>> From an admin's point of view, this is a setting I would like to keep
>> persisted across reboots. How would this work with sysfs?
>
> How about just as the "initial" ioctl command to set things up? Don't
> grab any cpu pools until asked to. Otherwise, what happens when you
> load this module on a system that can't support it?
That would give any user with access to the enclave device the ability
to remove CPUs from the system. That's clearly a CAP_ADMIN task in my book.
Hence this whole split: The admin defines the CPU Pool, users can safely
consume this pool to spawn enclaves from it.
So I really don't think an ioctl would be a great user experience. Same
for a sysfs file - although that's probably slightly better than the ioctl.
Other options I can think of:
* sysctl (for modules?)
* module parameter (as implemented here)
* proc file (deprecated FWIW)
The key is the tenant split: Admin sets the pool up, user consumes. This
setup should happen (early) on boot, so that system services can spawn
enclaves.
> module parameters are a major pain, you know this :)
I think in this case it's the least painful option ;). But I'm really
happy to hear about an actually good alternative to it. Right now, I
just can't think of any.
Alex
On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
>
>
> On 26.05.20 14:33, Greg KH wrote:
> >
> > On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
> > >
> > >
> > > On 26.05.20 08:51, Greg KH wrote:
> > > >
> > > > On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
> > > > > +#define NE "nitro_enclaves: "
> > > >
> > > > Again, no need for this.
> > > >
> > > > > +#define NE_DEV_NAME "nitro_enclaves"
> > > >
> > > > KBUILD_MODNAME?
> > > >
> > > > > +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
> > > > > +
> > > > > +static char *ne_cpus;
> > > > > +module_param(ne_cpus, charp, 0644);
> > > > > +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
> > > >
> > > > Again, please do not do this.
> > >
> > > I actually asked her to put this one in specifically.
> > >
> > > The concept of this parameter is very similar to isolcpus= and maxcpus= in
> > > that it takes CPUs away from Linux and instead donates them to the
> > > underlying hypervisor, so that it can spawn enclaves using them.
> > >
> > > From an admin's point of view, this is a setting I would like to keep
> > > persisted across reboots. How would this work with sysfs?
> >
> > How about just as the "initial" ioctl command to set things up? Don't
> > grab any cpu pools until asked to. Otherwise, what happens when you
> > load this module on a system that can't support it?
>
> That would give any user with access to the enclave device the ability to
> remove CPUs from the system. That's clearly a CAP_ADMIN task in my book.
Ok, what's wrong with that?
> Hence this whole split: The admin defines the CPU Pool, users can safely
> consume this pool to spawn enclaves from it.
But having the admin define that at module load / boot time, is a major
pain. What tools do they have that allow them to do that easily?
> So I really don't think an ioctl would be a great user experience. Same for
> a sysfs file - although that's probably slightly better than the ioctl.
You already are using ioctls to control this thing, right? What's wrong
with "one more"? :)
> Other options I can think of:
>
> * sysctl (for modules?)
Ick.
> * module parameter (as implemented here)
Ick.
> * proc file (deprecated FWIW)
Ick.
> The key is the tenant split: Admin sets the pool up, user consumes. This
> setup should happen (early) on boot, so that system services can spawn
> enclaves.
But it takes more than just this initial "split up" to set the pool up,
right? Why not make this part of that initial process? What makes this
so special you have to do this at module load time only?
thanks,
greg k-h
On 26.05.20 08:51, Greg KH wrote:
>
> On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
>> +#define NE "nitro_enclaves: "
>
> Again, no need for this.
>
>> +#define NE_DEV_NAME "nitro_enclaves"
>
> KBUILD_MODNAME?
>
>> +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
>> +
>> +static char *ne_cpus;
>> +module_param(ne_cpus, charp, 0644);
>> +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
>
> Again, please do not do this.
I actually asked her to put this one in specifically.
The concept of this parameter is very similar to isolcpus= and maxcpus=
in that it takes CPUs away from Linux and instead donates them to the
underlying hypervisor, so that it can spawn enclaves using them.
From an admin's point of view, this is a setting I would like to keep
persisted across reboots. How would this work with sysfs?
> Can you get the other amazon.com developers on the cc: list to review
> this before you send it out again? I feel like I am doing basic review
> of things that should be easily caught by them before you ask the
> community to review your code.
Again, my fault :). We did a good number of internal review rounds, but
I guess I didn't catch the bits you pointed out.
So yes, let's give everyone in CC the chance to review v3 properly first
before v4 goes out.
> And get them to sign off on it too, showing they agree with the design
> decisions here :)
I would expect a Reviewed-by tag as a result from the above would
satisfy this? :)
Alex
On 26.05.20 15:17, Greg KH wrote:
>
> On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
>>
>>
>> On 26.05.20 14:33, Greg KH wrote:
>>>
>>> On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
>>>>
>>>>
>>>> On 26.05.20 08:51, Greg KH wrote:
>>>>>
>>>>> On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
>>>>>> +#define NE "nitro_enclaves: "
>>>>>
>>>>> Again, no need for this.
>>>>>
>>>>>> +#define NE_DEV_NAME "nitro_enclaves"
>>>>>
>>>>> KBUILD_MODNAME?
>>>>>
>>>>>> +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
>>>>>> +
>>>>>> +static char *ne_cpus;
>>>>>> +module_param(ne_cpus, charp, 0644);
>>>>>> +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
>>>>>
>>>>> Again, please do not do this.
>>>>
>>>> I actually asked her to put this one in specifically.
>>>>
>>>> The concept of this parameter is very similar to isolcpus= and maxcpus= in
>>>> that it takes CPUs away from Linux and instead donates them to the
>>>> underlying hypervisor, so that it can spawn enclaves using them.
>>>>
>>>> From an admin's point of view, this is a setting I would like to keep
>>>> persisted across reboots. How would this work with sysfs?
>>>
>>> How about just as the "initial" ioctl command to set things up? Don't
>>> grab any cpu pools until asked to. Otherwise, what happens when you
>>> load this module on a system that can't support it?
>>
>> That would give any user with access to the enclave device the ability to
>> remove CPUs from the system. That's clearly a CAP_ADMIN task in my book.
>
> Ok, what's wrong with that?
Would you want random users to get the ability to hot unplug CPUs from
your system? At unlimited quantity? I don't :).
>
>> Hence this whole split: The admin defines the CPU Pool, users can safely
>> consume this pool to spawn enclaves from it.
>
> But having the admin define that at module load / boot time, is a major
> pain. What tools do they have that allow them to do that easily?
The normal toolbox: editing /etc/default/grub, adding an
/etc/modprobe.d/ file.
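For instance, persisting the pool is a one-line modprobe.d file (the CPU
range here is only an example):

    # /etc/modprobe.d/nitro_enclaves.conf
    options nitro_enclaves ne_cpus=2-7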
When else but at module load / boot time would you define it? I really don't
want to have a device node that in theory "the world" can use which then
allows any user on the system to hot unplug every CPU but 0 from my system.
>
>> So I really don't think an ioctl would be a great user experience. Same for
>> a sysfs file - although that's probably slightly better than the ioctl.
>
> You already are using ioctls to control this thing, right? What's wrong
> with "one more"? :)
So what we *could* do is add an ioctl to set the pool size which then
does a CAP_ADMIN check. That however means you now are in priority hell:
A user that wants to spawn an enclave as part of an nginx service would
need to create another service to set the pool size and indicate the
dependency in systemd control files.
Is that really better than a module parameter?
>
>> Other options I can think of:
>>
>> * sysctl (for modules?)
>
> Ick.
>
>> * module parameter (as implemented here)
>
> Ick.
>
>> * proc file (deprecated FWIW)
>
> Ick.
>
>> The key is the tenant split: Admin sets the pool up, user consumes. This
>> setup should happen (early) on boot, so that system services can spawn
>> enclaves.
>
> But it takes more than just this initial "split up" to set the pool up,
I don't quite follow. The initial "split up" is all it takes. From the
hypervisor's point of view, the physical underlying cores will not be
used to schedule the parent as soon as an enclave is running on them.
Which CPUs are available for enclaves however is purely a parent OS
construct, hence the module parameter.
> right? Why not make this part of that initial process? What makes this
> so special you have to do this at module load time only?
What is the "initial process"? It's really 2 stages: One stage to create
a pool (CAP_ADMIN) which makes sure that some cores become invisible to
the Linux scheduler and one stage to spawn an enclave (normal user
permission) on that pool's CPUs.
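From user space, the two stages could look roughly like this (the second
command and its tooling are illustrative, not part of the patch series):

    # Stage 1 (admin, CAP_ADMIN): reserve the CPU pool
    $ echo 2-7 > /sys/module/nitro_enclaves/parameters/ne_cpus
    # Stage 2 (unprivileged user): spawn an enclave from the pool,
    # via the ioctl interface on the misc device
    $ enclave-tool run --cpus 2 --memory 512M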
Alex
On 26/05/2020 15:33, Greg KH wrote:
> On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
>>
>> On 26.05.20 08:51, Greg KH wrote:
>>> On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
>>>> +#define NE "nitro_enclaves: "
>>> Again, no need for this.
>>>
>>>> +#define NE_DEV_NAME "nitro_enclaves"
>>> KBUILD_MODNAME?
>>>
>>>> +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
>>>> +
>>>> +static char *ne_cpus;
>>>> +module_param(ne_cpus, charp, 0644);
>>>> +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
>>> Again, please do not do this.
>> I actually asked her to put this one in specifically.
>>
>> The concept of this parameter is very similar to isolcpus= and maxcpus= in
>> that it takes CPUs away from Linux and instead donates them to the
>> underlying hypervisor, so that it can spawn enclaves using them.
>>
>> From an admin's point of view, this is a setting I would like to keep
>> persisted across reboots. How would this work with sysfs?
> How about just as the "initial" ioctl command to set things up? Don't
> grab any cpu pools until asked to. Otherwise, what happens when you
> load this module on a system that can't support it?
>
> module parameters are a major pain, you know this :)
>
>> So yes, let's give everyone in CC the chance to review v3 properly first
>> before v4 goes out.
>>
>>> And get them to sign off on it too, showing they agree with the design
>>> decisions here :)
>> I would expect a Reviewed-by tag as a result from the above would satisfy
>> this? :)
> That would be most appreciated.
With regard to reviewing, yes, the patch series went through several
rounds before being submitted upstream.
I posted v3 shortly after v2 to have more visibility into the changelog
for each patch in addition to the cover letter changelog. But no major
design changes in there. :)
Thank you.
Andra
On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
>
>
> On 26.05.20 08:51, Greg KH wrote:
> >
> > On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
> > > +#define NE "nitro_enclaves: "
> >
> > Again, no need for this.
> >
> > > +#define NE_DEV_NAME "nitro_enclaves"
> >
> > KBUILD_MODNAME?
> >
> > > +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
> > > +
> > > +static char *ne_cpus;
> > > +module_param(ne_cpus, charp, 0644);
> > > +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
> >
> > Again, please do not do this.
>
> I actually asked her to put this one in specifically.
>
> The concept of this parameter is very similar to isolcpus= and maxcpus= in
> that it takes CPUs away from Linux and instead donates them to the
> underlying hypervisor, so that it can spawn enclaves using them.
>
> From an admin's point of view, this is a setting I would like to keep
> persisted across reboots. How would this work with sysfs?
How about just as the "initial" ioctl command to set things up? Don't
grab any cpu pools until asked to. Otherwise, what happens when you
load this module on a system that can't support it?
module parameters are a major pain, you know this :)
> So yes, let's give everyone in CC the chance to review v3 properly first
> before v4 goes out.
>
> > And get them to sign off on it too, showing they agree with the design
> > decisions here :)
>
> I would expect a Reviewed-by tag as a result from the above would satisfy
> this? :)
That would be most appreciated.
thanks,
greg k-h
On 26/05/2020 09:44, Greg KH wrote:
> On Tue, May 26, 2020 at 01:13:18AM +0300, Andra Paraschiv wrote:
>> +struct enclave_get_slot_req {
>> + /* Context ID (CID) for the enclave vsock device. */
>> + u64 enclave_cid;
>> +} __attribute__ ((__packed__));
> Can you really "pack" a single member structure?
>
> Anyway, we have better ways to specify this instead of the "raw"
> __attribute__ option. But first see if you really need any of these, at
> first glance, I do not think you do at all, and they can all be removed.
There are a couple of data structures with more than one member and
multiple field sizes. And for the ones that are not, we gathered
feedback from previous rounds of review suggesting that we consider
adding a "flags" field in there for further extensibility.
I can modify to have "__packed" instead of the attribute callout.
Thanks,
Andra
On 26/05/2020 09:46, Greg KH wrote:
> On Tue, May 26, 2020 at 01:13:19AM +0300, Andra Paraschiv wrote:
>> +/* Nitro Enclaves (NE) misc device */
>> +extern struct miscdevice ne_miscdevice;
> Why does your misc device need to be in a .h file?
>
> Having the patch series like this (add random .h files, and then start
> to use them), is hard to review. Would you want to try to review a
> series written in this way?
The misc device is registered / unregistered during the NE PCI device
probe / remove, since a working PCI device is a dependency for exposing
the misc device.
The way the codebase is split in files is mainly the ioctl logic / misc
device in one file and the PCI device logic in another file, so that the
whole codebase does not end up in a single big file. Given the misc
device (un)register logic above, the misc device needs to be available
to the PCI device setup logic.
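As a sketch of that dependency (the probe / remove function names follow
the changelog above; the bodies are elided):

    static int ne_pci_probe(struct pci_dev *pdev,
                            const struct pci_device_id *id)
    {
            /* ... PCI enable, BAR mapping, MSI-X setup ... */
            return misc_register(&ne_miscdevice);
    }

    static void ne_pci_remove(struct pci_dev *pdev)
    {
            misc_deregister(&ne_miscdevice);
            /* ... MSI-X teardown, PCI disable ... */
    }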
Andra
On 26/05/2020 09:48, Greg KH wrote:
> On Tue, May 26, 2020 at 01:13:20AM +0300, Andra Paraschiv wrote:
>> The Nitro Enclaves PCI device is used by the kernel driver as a means of
>> communication with the hypervisor on the host where the primary VM and
>> the enclaves run. It handles requests with regard to enclave lifetime.
>>
>> Setup the PCI device driver and add support for MSI-X interrupts.
>>
>> Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
>> Signed-off-by: Alexandru Ciobotaru <[email protected]>
>> Signed-off-by: Andra Paraschiv <[email protected]>
>> ---
>> Changelog
>>
>> v2 -> v3
>>
>> * Remove the GPL additional wording as SPDX-License-Identifier is already in
>> place.
>> * Remove the WARN_ON calls.
>> * Remove linux/bug include that is not needed.
>> * Update static calls sanity checks.
>> * Remove "ratelimited" from the logs that are not in the ioctl call paths.
>> * Update kzfree() calls to kfree().
>>
>> v1 -> v2
>>
>> * Add log pattern for NE.
>> * Update PCI device setup functions to receive PCI device data structure and
>> then get private data from it inside the functions logic.
>> * Remove the BUG_ON calls.
>> * Add teardown function for MSI-X setup.
>> * Update goto labels to match their purpose.
>> * Implement TODO for NE PCI device disable state check.
>> * Update function name for NE PCI device probe / remove.
>> ---
>> drivers/virt/nitro_enclaves/ne_pci_dev.c | 252 +++++++++++++++++++++++
>> 1 file changed, 252 insertions(+)
>> create mode 100644 drivers/virt/nitro_enclaves/ne_pci_dev.c
>>
>> diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.c b/drivers/virt/nitro_enclaves/ne_pci_dev.c
>> new file mode 100644
>> index 000000000000..0b66166787b6
>> --- /dev/null
>> +++ b/drivers/virt/nitro_enclaves/ne_pci_dev.c
>> @@ -0,0 +1,252 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
>> + */
>> +
>> +/* Nitro Enclaves (NE) PCI device driver. */
>> +
>> +#include <linux/delay.h>
>> +#include <linux/device.h>
>> +#include <linux/list.h>
>> +#include <linux/mutex.h>
>> +#include <linux/module.h>
>> +#include <linux/nitro_enclaves.h>
>> +#include <linux/pci.h>
>> +#include <linux/types.h>
>> +#include <linux/wait.h>
>> +
>> +#include "ne_misc_dev.h"
>> +#include "ne_pci_dev.h"
>> +
>> +#define DEFAULT_TIMEOUT_MSECS (120000) /* 120 sec */
>> +
>> +#define NE "nitro_enclaves: "
> Why is this needed? The dev_* functions should give you all the
> information that you need to properly describe the driver and device in
> question. No extra "prefixes" should be needed at all.
This was needed to have an identifier for the overall NE logic - PCI
dev, ioctl and misc dev.
The ioctl and misc dev logic has pr_* logs, but I can update them to
dev_* with misc dev, then remove this prefix.
Thank you.
Andra
On Tue, May 26, 2020 at 08:01:36PM +0300, Paraschiv, Andra-Irina wrote:
>
>
> On 26/05/2020 09:44, Greg KH wrote:
> > On Tue, May 26, 2020 at 01:13:18AM +0300, Andra Paraschiv wrote:
> > > +struct enclave_get_slot_req {
> > > + /* Context ID (CID) for the enclave vsock device. */
> > > + u64 enclave_cid;
> > > +} __attribute__ ((__packed__));
> > Can you really "pack" a single member structure?
> >
> > Anyway, we have better ways to specify this instead of the "raw"
> > __attribute__ option. But first see if you really need any of these, at
> > first glance, I do not think you do at all, and they can all be removed.
>
> There are a couple of data structures with more than one member and multiple
> field sizes. And for the ones that are not, we gathered feedback from
> previous rounds of review suggesting that we consider adding a "flags" field
> in there for further extensibility.
Please do not do that in ioctls. Just create new calls instead of
trying to "extend" existing ones. It's always much easier.
> I can modify to have "__packed" instead of the attribute callout.
Make sure you even need that, as I don't think you do for structures
like the above one, right?
thanks,
greg k-h
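In practice, Greg's advice boils down to giving any extended request its
own call and structure instead of a "flags" escape hatch; a sketch with
purely illustrative ioctl numbers, names and fields:

    struct enclave_get_slot_req_v2 {
            __u64 enclave_cid;
            __u64 flags; /* whatever the new call actually needs */
    };

    /* The old call keeps its number; the new one gets its own. */
    #define NE_GET_SLOT    _IOR('N', 0x20, struct enclave_get_slot_req)
    #define NE_GET_SLOT_V2 _IOR('N', 0x21, struct enclave_get_slot_req_v2)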
On Tue, May 26, 2020 at 03:44:30PM +0200, Alexander Graf wrote:
>
>
> On 26.05.20 15:17, Greg KH wrote:
> >
> > On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
> > >
> > >
> > > On 26.05.20 14:33, Greg KH wrote:
> > > >
> > > > On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
> > > > >
> > > > >
> > > > > On 26.05.20 08:51, Greg KH wrote:
> > > > > >
> > > > > > On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
> > > > > > > +#define NE "nitro_enclaves: "
> > > > > >
> > > > > > Again, no need for this.
> > > > > >
> > > > > > > +#define NE_DEV_NAME "nitro_enclaves"
> > > > > >
> > > > > > KBUILD_MODNAME?
> > > > > >
> > > > > > > +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
> > > > > > > +
> > > > > > > +static char *ne_cpus;
> > > > > > > +module_param(ne_cpus, charp, 0644);
> > > > > > > +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
> > > > > >
> > > > > > Again, please do not do this.
> > > > >
> > > > > I actually asked her to put this one in specifically.
> > > > >
> > > > > The concept of this parameter is very similar to isolcpus= and maxcpus= in
> > > > > that it takes CPUs away from Linux and instead donates them to the
> > > > > underlying hypervisor, so that it can spawn enclaves using them.
> > > > >
> > > > > From an admin's point of view, this is a setting I would like to keep
> > > > > persisted across reboots. How would this work with sysfs?
> > > >
> > > > How about just as the "initial" ioctl command to set things up? Don't
> > > > grab any cpu pools until asked to. Otherwise, what happens when you
> > > > load this module on a system that can't support it?
> > >
> > > That would give any user with access to the enclave device the ability to
> > > remove CPUs from the system. That's clearly a CAP_ADMIN task in my book.
> >
> > Ok, what's wrong with that?
>
> Would you want random users to get the ability to hot unplug CPUs from your
> system? At unlimited quantity? I don't :).
A random user, no, but one with admin rights, why not? They can do that
already today on your system, this isn't new.
> > > Hence this whole split: The admin defines the CPU Pool, users can safely
> > > consume this pool to spawn enclaves from it.
> >
> > But having the admin define that at module load / boot time, is a major
> > pain. What tools do they have that allow them to do that easily?
>
> The normal toolbox: editing /etc/default/grub, adding an /etc/modprobe.d/
> file.
Editing grub files is horrid, come on...
> When but at module load / boot time would you define it? I really don't want
> to have a device node that in theory "the world" can use which then allows
> any user on the system to hot unplug every CPU but 0 from my system.
But you have that already when the PCI device is found, right? What is
the initial interface to the driver? What's wrong with using that?
Or am I really missing something as to how this all fits together with
the different pieces? Seeing the patches as-is doesn't really provide a
good overview, sorry.
> > > So I really don't think an ioctl would be a great user experience. Same for
> > > a sysfs file - although that's probably slightly better than the ioctl.
> >
> > You already are using ioctls to control this thing, right? What's wrong
> > with "one more"? :)
>
> So what we *could* do is add an ioctl to set the pool size which then does a
> CAP_ADMIN check. That however means you now are in priority hell:
>
> A user that wants to spawn an enclave as part of an nginx service would need
> to create another service to set the pool size and indicate the dependency
> in systemd control files.
>
> Is that really better than a module parameter?
module parameters are hard to change, and it's hard to manage control
over who really is changing them.
thanks,
greg k-h
On Tue, May 26, 2020 at 09:35:33PM +0300, Paraschiv, Andra-Irina wrote:
>
>
> On 26/05/2020 09:48, Greg KH wrote:
> > On Tue, May 26, 2020 at 01:13:20AM +0300, Andra Paraschiv wrote:
> > > The Nitro Enclaves PCI device is used by the kernel driver as a means of
> > > communication with the hypervisor on the host where the primary VM and
> > > the enclaves run. It handles requests with regard to enclave lifetime.
> > >
> > > Setup the PCI device driver and add support for MSI-X interrupts.
> > >
> > > Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
> > > Signed-off-by: Alexandru Ciobotaru <[email protected]>
> > > Signed-off-by: Andra Paraschiv <[email protected]>
> > > ---
> > > Changelog
> > >
> > > v2 -> v3
> > >
> > > * Remove the GPL additional wording as SPDX-License-Identifier is already in
> > > place.
> > > * Remove the WARN_ON calls.
> > > * Remove linux/bug include that is not needed.
> > > * Update static calls sanity checks.
> > > * Remove "ratelimited" from the logs that are not in the ioctl call paths.
> > > * Update kzfree() calls to kfree().
> > >
> > > v1 -> v2
> > >
> > > * Add log pattern for NE.
> > > * Update PCI device setup functions to receive PCI device data structure and
> > > then get private data from it inside the functions logic.
> > > * Remove the BUG_ON calls.
> > > * Add teardown function for MSI-X setup.
> > > * Update goto labels to match their purpose.
> > > * Implement TODO for NE PCI device disable state check.
> > > * Update function name for NE PCI device probe / remove.
> > > ---
> > > drivers/virt/nitro_enclaves/ne_pci_dev.c | 252 +++++++++++++++++++++++
> > > 1 file changed, 252 insertions(+)
> > > create mode 100644 drivers/virt/nitro_enclaves/ne_pci_dev.c
> > >
> > > diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.c b/drivers/virt/nitro_enclaves/ne_pci_dev.c
> > > new file mode 100644
> > > index 000000000000..0b66166787b6
> > > --- /dev/null
> > > +++ b/drivers/virt/nitro_enclaves/ne_pci_dev.c
> > > @@ -0,0 +1,252 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
> > > + */
> > > +
> > > +/* Nitro Enclaves (NE) PCI device driver. */
> > > +
> > > +#include <linux/delay.h>
> > > +#include <linux/device.h>
> > > +#include <linux/list.h>
> > > +#include <linux/mutex.h>
> > > +#include <linux/module.h>
> > > +#include <linux/nitro_enclaves.h>
> > > +#include <linux/pci.h>
> > > +#include <linux/types.h>
> > > +#include <linux/wait.h>
> > > +
> > > +#include "ne_misc_dev.h"
> > > +#include "ne_pci_dev.h"
> > > +
> > > +#define DEFAULT_TIMEOUT_MSECS (120000) /* 120 sec */
> > > +
> > > +#define NE "nitro_enclaves: "
> > Why is this needed? The dev_* functions should give you all the
> > information that you need to properly describe the driver and device in
> > question. No extra "prefixes" should be needed at all.
>
> This was needed to have an identifier for the overall NE logic - PCI dev,
> ioctl and misc dev.
Why? They are all different "devices", and refer to different
interfaces. Stick to what the dev_* gives you for them. You probably
want to stick with the pci dev for almost all of those anyway.
> The ioctl and misc dev logic has pr_* logs, but I can update them to dev_*
> with misc dev, then remove this prefix.
That would be good, thanks.
greg k-h
On 27.05.20 00:24, Greg KH wrote:
>
> On Tue, May 26, 2020 at 03:44:30PM +0200, Alexander Graf wrote:
>>
>>
>> On 26.05.20 15:17, Greg KH wrote:
>>>
>>> On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
>>>>
>>>>
>>>> On 26.05.20 14:33, Greg KH wrote:
>>>>>
>>>>> On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
>>>>>>
>>>>>>
>>>>>> On 26.05.20 08:51, Greg KH wrote:
>>>>>>>
>>>>>>> On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
>>>>>>>> +#define NE "nitro_enclaves: "
>>>>>>>
>>>>>>> Again, no need for this.
>>>>>>>
>>>>>>>> +#define NE_DEV_NAME "nitro_enclaves"
>>>>>>>
>>>>>>> KBUILD_MODNAME?
>>>>>>>
>>>>>>>> +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
>>>>>>>> +
>>>>>>>> +static char *ne_cpus;
>>>>>>>> +module_param(ne_cpus, charp, 0644);
>>>>>>>> +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
>>>>>>>
>>>>>>> Again, please do not do this.
>>>>>>
>>>>>> I actually asked her to put this one in specifically.
>>>>>>
>>>>>> The concept of this parameter is very similar to isolcpus= and maxcpus= in
>>>>>> that it takes CPUs away from Linux and instead donates them to the
>>>>>> underlying hypervisor, so that it can spawn enclaves using them.
>>>>>>
>>>>>> From an admin's point of view, this is a setting I would like to keep
>>>>>> persisted across reboots. How would this work with sysfs?
>>>>>
>>>>> How about just as the "initial" ioctl command to set things up? Don't
>>>>> grab any cpu pools until asked to. Otherwise, what happens when you
>>>>> load this module on a system that can't support it?
>>>>
>>>> That would give any user with access to the enclave device the ability to
>>>> remove CPUs from the system. That's clearly a CAP_ADMIN task in my book.
>>>
>>> Ok, what's wrong with that?
>>
>> Would you want random users to get the ability to hot unplug CPUs from your
>> system? At unlimited quantity? I don't :).
>
> A random user, no, but one with admin rights, why not? They can do that
> already today on your system, this isn't new.
>
>>>> Hence this whole split: The admin defines the CPU Pool, users can safely
>>>> consume this pool to spawn enclaves from it.
>>>
>>> But having the admin define that at module load / boot time, is a major
>>> pain. What tools do they have that allow them to do that easily?
>>
>> The normal toolbox: editing /etc/default/grub, adding an /etc/modprobe.d/
>> file.
>
> Editing grub files is horrid, come on...
It's not editing grub files, it's editing template config files that
then are used as input for grub config file generation :).
>> When but at module load / boot time would you define it? I really don't want
>> to have a device node that in theory "the world" can use which then allows
>> any user on the system to hot unplug every CPU but 0 from my system.
>
> But you have that already when the PCI device is found, right? What is
> the initial interface to the driver? What's wrong with using that?
>
> Or am I really missing something as to how this all fits together with
> the different pieces? Seeing the patches as-is doesn't really provide a
> good overview, sorry.
Ok, let me walk you through the core donation process.
Imagine you have a parent VM with 8 cores. Every one of those virtual
cores is 1:1 mapped to a physical core.
You enumerate the PCI device, you start working with it. None of that
changes your topology.
You now create an enclave spanning 2 cores. The hypervisor will remove
the 1:1 map for those 2 cores and instead mark them as "free floating"
on the remaining 6 cores. It then uses the 2 freed up cores and creates
a 1:1 map for the enclave's 2 cores.
To ensure that we still see a realistic mapping of core topology, we
need to remove those 2 cores from the parent VM's scope of execution.
The way this is done today is by going through CPU offlining.
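CPU offlining here means the standard CPU hotplug sysfs interface, i.e.
conceptually the equivalent of:

    # take CPU 2 away from the Linux scheduler
    $ echo 0 > /sys/devices/system/cpu/cpu2/online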
The first and obvious option would be to offline all respective CPUs
when an enclave gets created. But unprivileged users should be able to
spawn enclaves. So how do I ensure that my unprivileged user doesn't
just offline all of my parent VM's CPUs?
The option implemented here is that we fold this into a two-stage
approach. The admin reserves a "pool" of cores for enclaves to use.
Unprivileged users can then consume cores from that pool, but not more
than those.
That way, unprivileged users have no influence over whether a core is
enabled or not. They can only consume the pool of cores that was
dedicated for enclave use.
It also has the big advantage that you don't have dynamically changing
CPU topology in your system. Long-lived processes that adjust their
environment to the topology can still do so, without cores getting
pulled out under their feet.
>
>>>> So I really don't think an ioctl would be a great user experience. Same for
>>>> a sysfs file - although that's probably slightly better than the ioctl.
>>>
>>> You already are using ioctls to control this thing, right? What's wrong
>>> with "one more"? :)
>>
>> So what we *could* do is add an ioctl to set the pool size which then does a
>> CAP_ADMIN check. That however means you now are in priority hell:
>>
>> A user that wants to spawn an enclave as part of an nginx service would need
>> to create another service to set the pool size and indicate the dependency
>> in systemd control files.
>>
>> Is that really better than a module parameter?
>
> module parameters are hard to change, and manage control over who really
> is changing them.
What is hard about
$ echo 1-5 > /sys/module/nitro_enclaves/parameters/ne_cpus
I also fail to understand the argument about managing control over who
is changing them. Only someone with CAP_ADMIN can change them, no? I
feel like I'm missing something obvious in your argumentation :).
Alex
On Thu, May 28, 2020 at 03:01:36PM +0200, Alexander Graf wrote:
>
>
> On 27.05.20 00:24, Greg KH wrote:
> >
> > On Tue, May 26, 2020 at 03:44:30PM +0200, Alexander Graf wrote:
> > >
> > >
> > > On 26.05.20 15:17, Greg KH wrote:
> > > >
> > > > On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
> > > > >
> > > > >
> > > > > On 26.05.20 14:33, Greg KH wrote:
> > > > > >
> > > > > > On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
> > > > > > >
> > > > > > >
> > > > > > > On 26.05.20 08:51, Greg KH wrote:
> > > > > > > >
> > > > > > > > On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
> > > > > > > > > +#define NE "nitro_enclaves: "
> > > > > > > >
> > > > > > > > Again, no need for this.
> > > > > > > >
> > > > > > > > > +#define NE_DEV_NAME "nitro_enclaves"
> > > > > > > >
> > > > > > > > KBUILD_MODNAME?
> > > > > > > >
> > > > > > > > > +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
> > > > > > > > > +
> > > > > > > > > +static char *ne_cpus;
> > > > > > > > > +module_param(ne_cpus, charp, 0644);
> > > > > > > > > +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
> > > > > > > >
> > > > > > > > Again, please do not do this.
> > > > > > >
> > > > > > > I actually asked her to put this one in specifically.
> > > > > > >
> > > > > > > The concept of this parameter is very similar to isolcpus= and maxcpus= in
> > > > > > > that it takes CPUs away from Linux and instead donates them to the
> > > > > > > underlying hypervisor, so that it can spawn enclaves using them.
> > > > > > >
> > > > > > > From an admin's point of view, this is a setting I would like to keep
> > > > > > > persisted across reboots. How would this work with sysfs?
> > > > > >
> > > > > > How about just as the "initial" ioctl command to set things up? Don't
> > > > > > grab any cpu pools until asked to. Otherwise, what happens when you
> > > > > > load this module on a system that can't support it?
> > > > >
> > > > > That would give any user with access to the enclave device the ability to
> > > > > remove CPUs from the system. That's clearly a CAP_ADMIN task in my book.
> > > >
> > > > Ok, what's wrong with that?
> > >
> > > Would you want random users to get the ability to hot unplug CPUs from your
> > > system? At unlimited quantity? I don't :).
> >
> > A random user, no, but one with admin rights, why not? They can do that
> > already today on your system, this isn't new.
> >
> > > > > Hence this whole split: The admin defines the CPU Pool, users can safely
> > > > > consume this pool to spawn enclaves from it.
> > > >
> > > > But having the admin define that at module load / boot time, is a major
> > > > pain. What tools do they have that allow them to do that easily?
> > >
> > > The normal toolbox: editing /etc/default/grub, adding an /etc/modprobe.d/
> > > file.
> >
> > Editing grub files is horrid, come on...
>
> It's not editing grub files, it's editing template config files that then
> are used as input for grub config file generation :).
>
> > > When but at module load / boot time would you define it? I really don't want
> > > to have a device node that in theory "the world" can use which then allows
> > > any user on the system to hot unplug every CPU but 0 from my system.
> >
> > But you have that already when the PCI device is found, right? What is
> > the initial interface to the driver? What's wrong with using that?
> >
> > Or am I really missing something as to how this all fits together with
> > the different pieces? Seeing the patches as-is doesn't really provide a
> > good overview, sorry.
>
> Ok, let me walk you through the core donation process.
>
> Imagine you have a parent VM with 8 cores. Every one of those virtual cores
> is 1:1 mapped to a physical core.
>
> You enumerate the PCI device, you start working with it. None of that
> changes your topology.
>
> You now create an enclave spanning 2 cores. The hypervisor will remove the
> 1:1 map for those 2 cores and instead mark them as "free floating" on the
> remaining 6 cores. It then uses the 2 freed up cores and creates a 1:1 map
> for the enclave's 2 cores.
>
> To ensure that we still see a realistic mapping of core topology, we need to
> remove those 2 cores from the parent VM's scope of execution. The way this
> is done today is by going through CPU offlining.
>
> The first and obvious option would be to offline all respective CPUs when an
> enclave gets created. But unprivileged users should be able to spawn
> enclaves. So how do I ensure that my unprivileged user doesn't just offline
> all of my parent VM's CPUs?
>
> The option implemented here is that we fold this into a two-stage approach.
> The admin reserves a "pool" of cores for enclaves to use. Unprivileged users
> can then consume cores from that pool, but not more than those.
>
> That way, unprivileged users have no influence over whether a core is
> enabled or not. They can only consume the pool of cores that was dedicated
> for enclave use.
>
> It also has the big advantage that you don't have dynamically changing CPU
> topology in your system. Long living processes that adjust their environment
> to the topology can still do so, without cores getting pulled out under
> their feet.
Ok, that makes more sense, but:
> > > > > So I really don't think an ioctl would be a great user experience. Same for
> > > > > a sysfs file - although that's probably slightly better than the ioctl.
> > > >
> > > > You already are using ioctls to control this thing, right? What's wrong
> > > > with "one more"? :)
> > >
> > > So what we *could* do is add an ioctl to set the pool size which then does a
> > > CAP_ADMIN check. That however means you now are in priority hell:
> > >
> > > A user that wants to spawn an enclave as part of an nginx service would need
> > > to create another service to set the pool size and indicate the dependency
> > > in systemd control files.
> > >
> > > Is that really better than a module parameter?
> >
> > module parameters are hard to change, and it's hard to manage control
> > over who really is changing them.
>
> What is hard about
>
> $ echo 1-5 > /sys/module/nitro_enclaves/parameters/ne_cpus
So at runtime, after all is booted and up and going, you just ripped
cores out from under someone's feet? :)
And the code really handles writing to that value while the module is
already loaded and up and running? At a quick glance, it didn't seem
like it would handle that very well, as it is only checked at ne_init()
time.
Or am I missing something?
Anyway, yes, if you can dynamically do this at runtime, that's great,
but it feels awkward to me to rely on one configuration thing as a
module parameter, and everything else through the ioctl interface.
Unification would seem to be a good thing, right?
thanks,
greg k-h
On 27/05/2020 01:19, Greg KH wrote:
> On Tue, May 26, 2020 at 09:35:33PM +0300, Paraschiv, Andra-Irina wrote:
>>
>> On 26/05/2020 09:48, Greg KH wrote:
>>> On Tue, May 26, 2020 at 01:13:20AM +0300, Andra Paraschiv wrote:
>>>> The Nitro Enclaves PCI device is used by the kernel driver as a means of
>>>> communication with the hypervisor on the host where the primary VM and
>>>> the enclaves run. It handles requests with regard to enclave lifetime.
>>>>
>>>> Setup the PCI device driver and add support for MSI-X interrupts.
>>>>
>>>> Signed-off-by: Alexandru-Catalin Vasile <[email protected]>
>>>> Signed-off-by: Alexandru Ciobotaru <[email protected]>
>>>> Signed-off-by: Andra Paraschiv <[email protected]>
>>>> ---
>>>> Changelog
>>>>
>>>> v2 -> v3
>>>>
>>>> * Remove the GPL additional wording as SPDX-License-Identifier is already in
>>>> place.
>>>> * Remove the WARN_ON calls.
>>>> * Remove linux/bug include that is not needed.
>>>> * Update static calls sanity checks.
>>>> * Remove "ratelimited" from the logs that are not in the ioctl call paths.
>>>> * Update kzfree() calls to kfree().
>>>>
>>>> v1 -> v2
>>>>
>>>> * Add log pattern for NE.
>>>> * Update PCI device setup functions to receive PCI device data structure and
>>>> then get private data from it inside the functions logic.
>>>> * Remove the BUG_ON calls.
>>>> * Add teardown function for MSI-X setup.
>>>> * Update goto labels to match their purpose.
>>>> * Implement TODO for NE PCI device disable state check.
>>>> * Update function name for NE PCI device probe / remove.
>>>> ---
>>>> drivers/virt/nitro_enclaves/ne_pci_dev.c | 252 +++++++++++++++++++++++
>>>> 1 file changed, 252 insertions(+)
>>>> create mode 100644 drivers/virt/nitro_enclaves/ne_pci_dev.c
>>>>
>>>> diff --git a/drivers/virt/nitro_enclaves/ne_pci_dev.c b/drivers/virt/nitro_enclaves/ne_pci_dev.c
>>>> new file mode 100644
>>>> index 000000000000..0b66166787b6
>>>> --- /dev/null
>>>> +++ b/drivers/virt/nitro_enclaves/ne_pci_dev.c
>>>> @@ -0,0 +1,252 @@
>>>> +// SPDX-License-Identifier: GPL-2.0
>>>> +/*
>>>> + * Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
>>>> + */
>>>> +
>>>> +/* Nitro Enclaves (NE) PCI device driver. */
>>>> +
>>>> +#include <linux/delay.h>
>>>> +#include <linux/device.h>
>>>> +#include <linux/list.h>
>>>> +#include <linux/mutex.h>
>>>> +#include <linux/module.h>
>>>> +#include <linux/nitro_enclaves.h>
>>>> +#include <linux/pci.h>
>>>> +#include <linux/types.h>
>>>> +#include <linux/wait.h>
>>>> +
>>>> +#include "ne_misc_dev.h"
>>>> +#include "ne_pci_dev.h"
>>>> +
>>>> +#define DEFAULT_TIMEOUT_MSECS (120000) /* 120 sec */
>>>> +
>>>> +#define NE "nitro_enclaves: "
>>> Why is this needed? The dev_* functions should give you all the
>>> information that you need to properly describe the driver and device in
>>> question. No extra "prefixes" should be needed at all.
>> This was needed to have an identifier for the overall NE logic - PCI dev,
>> ioctl and misc dev.
> Why? They are all different "devices", and refer to different
> interfaces. Stick to what the dev_* gives you for them. You probably
> want to stick with the pci dev for almost all of those anyway.
>
>> The ioctl and misc dev logic has pr_* logs, but I can update them to dev_*
>> with misc dev, then remove this prefix.
> That would be good, thanks.
That's already in v4; the pr_* logs are now replaced with dev_*.
Thanks,
Andra
On 27/05/2020 01:21, Greg KH wrote:
> On Tue, May 26, 2020 at 08:01:36PM +0300, Paraschiv, Andra-Irina wrote:
>>
>> On 26/05/2020 09:44, Greg KH wrote:
>>> On Tue, May 26, 2020 at 01:13:18AM +0300, Andra Paraschiv wrote:
>>>> +struct enclave_get_slot_req {
>>>> + /* Context ID (CID) for the enclave vsock device. */
>>>> + u64 enclave_cid;
>>>> +} __attribute__ ((__packed__));
>>> Can you really "pack" a single member structure?
>>>
>>> Anyway, we have better ways to specify this instead of the "raw"
>>> __attribute__ option. But first see if you really need any of these, at
>>> first glance, I do not think you do at all, and they can all be removed.
>> There are a couple of data structures with more than one member and multiple
>> field sizes. And for the ones that are not, we gathered feedback from
>> previous rounds of review suggesting that we consider adding a "flags" field
>> in there for further extensibility.
> Please do not do that in ioctls. Just create new calls instead of
> trying to "extend" existing ones. It's always much easier.
>
>> I can modify to have "__packed" instead of the attribute callout.
> Make sure you even need that, as I don't think you do for structures
> like the above one, right?
For the ones like the above, no, it's not needed; I just updated the
usage to "__packed".
Thanks,
Andra
On 28/05/2020 16:12, Greg KH wrote:
>
> On Thu, May 28, 2020 at 03:01:36PM +0200, Alexander Graf wrote:
>>
>> On 27.05.20 00:24, Greg KH wrote:
>>> On Tue, May 26, 2020 at 03:44:30PM +0200, Alexander Graf wrote:
>>>>
>>>> On 26.05.20 15:17, Greg KH wrote:
>>>>> On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
>>>>>>
>>>>>> On 26.05.20 14:33, Greg KH wrote:
>>>>>>> On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
>>>>>>>>
>>>>>>>> On 26.05.20 08:51, Greg KH wrote:
>>>>>>>>> On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
>>>>>>>>>> +#define NE "nitro_enclaves: "
>>>>>>>>> Again, no need for this.
>>>>>>>>>
>>>>>>>>>> +#define NE_DEV_NAME "nitro_enclaves"
>>>>>>>>> KBUILD_MODNAME?
>>>>>>>>>
>>>>>>>>>> +#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
>>>>>>>>>> +
>>>>>>>>>> +static char *ne_cpus;
>>>>>>>>>> +module_param(ne_cpus, charp, 0644);
>>>>>>>>>> +MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");
>>>>>>>>> Again, please do not do this.
>>>>>>>> I actually asked her to put this one in specifically.
>>>>>>>>
>>>>>>>> The concept of this parameter is very similar to isolcpus= and maxcpus= in
>>>>>>>> that it takes CPUs away from Linux and instead donates them to the
>>>>>>>> underlying hypervisor, so that it can spawn enclaves using them.
>>>>>>>>
>>>>>>>> From an admin's point of view, this is a setting I would like to keep
>>>>>>>> persisted across reboots. How would this work with sysfs?
>>>>>>> How about just as the "initial" ioctl command to set things up? Don't
>>>>>>> grab any cpu pools until asked to. Otherwise, what happens when you
>>>>>>> load this module on a system that can't support it?
>>>>>> That would give any user with access to the enclave device the ability to
>>>>>> remove CPUs from the system. That's clearly a CAP_ADMIN task in my book.
>>>>> Ok, what's wrong with that?
>>>> Would you want random users to get the ability to hot unplug CPUs from your
>>>> system? At unlimited quantity? I don't :).
>>> A random user, no, but one with admin rights, why not? They can do that
>>> already today on your system, this isn't new.
>>>
>>>>>> Hence this whole split: The admin defines the CPU Pool, users can safely
>>>>>> consume this pool to spawn enclaves from it.
>>>>> But having the admin define that at module load / boot time, is a major
>>>>> pain. What tools do they have that allow them to do that easily?
>>>> The normal toolbox: editing /etc/default/grub, adding an /etc/modprobe.d/
>>>> file.
>>> Editing grub files is horrid, come on...
>> It's not editing grub files, it's editing template config files that then
>> are used as input for grub config file generation :).
>>
>>>> When but at module load / boot time would you define it? I really don't want
>>>> to have a device node that in theory "the world" can use which then allows
>>>> any user on the system to hot unplug every CPU but 0 from my system.
>>> But you have that already when the PCI device is found, right? What is
>>> the initial interface to the driver? What's wrong with using that?
>>>
>>> Or am I really missing something as to how this all fits together with
>>> the different pieces? Seeing the patches as-is doesn't really provide a
>>> good overview, sorry.
>> Ok, let me walk you through the core donation process.
>>
>> Imagine you have a parent VM with 8 cores. Every one of those virtual cores
>> is 1:1 mapped to a physical core.
>>
>> You enumerate the PCI device, you start working with it. None of that
>> changes your topology.
>>
>> You now create an enclave spanning 2 cores. The hypervisor will remove the
>> 1:1 map for those 2 cores and instead mark them as "free floating" on the
>> remaining 6 cores. It then uses the 2 freed up cores and creates a 1:1 map
>> for the enclave's 2 cores.
>>
>> To ensure that we still see a realistic mapping of core topology, we need to
>> remove those 2 cores from the parent VM's scope of execution. The way this
>> is done today is by going through CPU offlining.
>>
>> The first and obvious option would be to offline all respective CPUs when an
>> enclave gets created. But unprivileged users should be able to spawn
>> enclaves. So how do I ensure that my unprivileged user doesn't just offline
>> all of my parent VM's CPUs?
>>
>> The option implemented here is that we fold this into a two-stage approach.
>> The admin reserves a "pool" of cores for enclaves to use. Unprivileged users
>> can then consume cores from that pool, but not more than those.
>>
>> That way, unprivileged users have no influence over whether a core is
>> enabled or not. They can only consume the pool of cores that was dedicated
>> for enclave use.
>>
>> It also has the big advantage that you don't have dynamically changing CPU
>> topology in your system. Long living processes that adjust their environment
>> to the topology can still do so, without cores getting pulled out under
>> their feet.
> Ok, that makes more sense, but:
>
>>>>>> So I really don't think an ioctl would be a great user experience. Same for
>>>>>> a sysfs file - although that's probably slightly better than the ioctl.
>>>>> You already are using ioctls to control this thing, right? What's wrong
>>>>> with "one more"? :)
>>>> So what we *could* do is add an ioctl to set the pool size which then does a
>>>> CAP_ADMIN check. That however means you now are in priority hell:
>>>>
>>>> A user that wants to spawn an enclave as part of an nginx service would need
>>>> to create another service to set the pool size and indicate the dependency
>>>> in systemd control files.
>>>>
>>>> Is that really better than a module parameter?
>>> module parameters are hard to change, and manage control over who really
>>> is changing them.
>> What is hard about
>>
>> $ echo 1-5 > /sys/module/nitro_enclaves/parameters/ne_cpus
> So at runtime, after all is booted and up and going, you just ripped
> cores out from under someone's feet? :)
>
> And the code really handles writing to that value while the module is
> already loaded and up and running? At a quick glance, it didn't seem
> like it would handle that very well as it only is checked at ne_init()
> time.
>
> Or am I missing something?
It's checked for now at module init, true.
I started with init and it remained as a TODO on my side to adapt the
logic so it can also handle the setup via the module's sysfs parameter
file.
Thanks,
Andra
>
> Anyway, yes, if you can dynamically do this at runtime, that's great,
> but it feels awkward to me to rely on one configuration thing as a
> module parameter, and everything else through the ioctl interface.
> Unification would seem to be a good thing, right?
>
> thanks,
>
> greg k-h
On Tue, 2020-05-26 at 08:51 +0200, Greg KH wrote:
>
> And get them to sign off on it too, showing they agree with the design
> decisions here :)
Isn't it generally frowned upon to publish a patch with internal
sign-offs on it already? Or do you mean for us to publicly sign off once
we have reviewed?
Cheers,
Ben.
On Tue, 2020-05-26 at 20:01 +0300, Paraschiv, Andra-Irina wrote:
>
> On 26/05/2020 09:44, Greg KH wrote:
> > On Tue, May 26, 2020 at 01:13:18AM +0300, Andra Paraschiv wrote:
> > > +struct enclave_get_slot_req {
> > > + /* Context ID (CID) for the enclave vsock device. */
> > > + u64 enclave_cid;
> > > +} __attribute__ ((__packed__));
> >
> > Can you really "pack" a single member structure?
> >
> > Anyway, we have better ways to specify this instead of the "raw"
> > __attribute__ option. But first see if you really need any of
> > these, at
> > first glance, I do not think you do at all, and they can all be
> > removed.
>
> There are a couple of data structures with more than one member and
> multiple field sizes. And for the ones that are not, we gathered
> feedback from previous rounds of review suggesting that we consider
> adding a "flags" field in there for further extensibility.
>
> I can modify to have "__packed" instead of the attribute callout.
I tend to prefer designing the protocol so that all the fields are
naturally aligned, which should avoid the need for the attribute. Is
it possible in this case?
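A sketch of that ordering-plus-padding approach (the structure and field
names beyond enclave_cid are illustrative, not from the patch series):

    struct enclave_example_req {
            __u64 enclave_cid; /* 8-byte member first: naturally aligned */
            __u32 flags;       /* hypothetical extensibility field */
            __u32 reserved;    /* explicit padding keeps sizeof() stable */
    };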
Cheers,
Ben.
On Tue, 2020-05-26 at 14:44 +0200, Alexander Graf wrote:
> So I really don't think an ioctl would be a great user experience. Same
> for a sysfs file - although that's probably slightly better than the ioctl.
What would be wrong with a sysfs file?
Another way to approach that makes sense from a kernel perspective is
to have the user first offline the CPUs, then "donate" them to the
driver via a sysfs file.
> Other options I can think of:
>
> * sysctl (for modules?)
Why would that be any good? If anything, sysctls are even more awkward
in my book :)
> * module parameter (as implemented here)
> * proc file (deprecated FWIW)
Yeah no.
> The key is the tenant split: Admin sets the pool up, user consumes. This
> setup should happen (early) on boot, so that system services can spawn
> enclaves.
Right, and you can have some init script or udev rule that sets that up
at boot from a sysadmin-produced config file, upon detection of the
enclave PCI device, for example.
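Such a udev rule could look roughly like this (the PCI vendor / device
IDs and the script path are placeholders, not taken from the patch
series):

    # /etc/udev/rules.d/99-nitro-enclaves.rules
    ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x1d0f", \
      ATTR{device}=="0xe4c1", RUN+="/usr/local/sbin/ne-setup-cpu-pool.sh"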
> > module parameters are a major pain, you know this :)
>
> I think in this case it's the least painful option ;). But I'm really
> happy to hear about an actually good alternative to it. Right now, I
> just can't think of any.
Cheers,
Ben.
On Tue, 2020-05-26 at 21:35 +0300, Paraschiv, Andra-Irina wrote:
> This was needed to have an identifier for the overall NE logic - PCI
> dev, ioctl and misc dev.
>
> The ioctl and misc dev logic has pr_* logs, but I can update them to
> dev_* with misc dev, then remove this prefix.
Or #define pr_fmt, but dev_ is better.
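The pr_fmt route is the usual one-liner, defined before any includes
that pull in the printk helpers:

    /* Prefixes every pr_*() call in this file with "nitro_enclaves: ". */
    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt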
Cheers,
Ben.
On Wed, 2020-05-27 at 00:21 +0200, Greg KH wrote:
> > There are a couple of data structures with more than one member and multiple
> > field sizes. And for the ones that are not, we gathered feedback from
> > previous rounds of review suggesting that we consider adding a "flags"
> > field in there for further extensibility.
>
> Please do not do that in ioctls. Just create new calls instead of
> trying to "extend" existing ones. It's always much easier.
>
> > I can modify to have "__packed" instead of the attribute callout.
>
> Make sure you even need that, as I don't think you do for structures
> like the above one, right?
Hrm, my impression (granted I only just started to look at this code)
is that these are protocol messages with the PCI devices, not strictly
just ioctl arguments (though they do get conveyed via such ioctls).
Andra-Irina, did I get that right? :-)
That said, I still think that by carefully ordering the fields and
using explicit padding, we can avoid the need for the packed attribute.
Cheers,
Ben.
On Wed, 2020-05-27 at 00:24 +0200, Greg KH wrote:
> > Would you want random users to get the ability to hot unplug CPUs from your
> > system? At unlimited quantity? I don't :).
>
> A random user, no, but one with admin rights, why not? They can do that
> already today on your system, this isn't new.
So I agree with you that a module parameter sucks. I disagree on the
ioctl :)
This is the kind of setup task that will probably end up being done by
some shell script at boot time based on some config file. Being able to
echo something in a sysfs file which will parse the standard-format CPU
list using the existing kernel functions to turn that into a cpu_mask
makes a lot more sense than having a binary interface via an ioctl
which will require an extra userspace program for use by the admin for
that one single task.
So sysfs is my preference here.
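A minimal sketch of that sysfs flavor, assuming a write-only device
attribute and leaving the actual CPU offlining out:

    /* ne_cpu_pool allocated with zalloc_cpumask_var() at init (elided). */
    static cpumask_var_t ne_cpu_pool;

    static ssize_t ne_cpus_store(struct device *dev,
                                 struct device_attribute *attr,
                                 const char *buf, size_t count)
    {
            /* Parse a standard cpu-list ("1-5,7") into a cpumask. */
            int rc = cpulist_parse(buf, ne_cpu_pool);

            if (rc < 0)
                    return rc;

            /* ... offline the selected CPUs and publish the pool ... */
            return count;
    }
    static DEVICE_ATTR_WO(ne_cpus);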
Another approach would be configfs, which would provide a more flexible
interface to potentially create multiple "CPU sets" that could be given
to different users or classes of users etc... but that might be pushing
it, I don't know.
Cheers,
Ben.
On Thu, 2020-05-28 at 15:12 +0200, Greg KH wrote:
> So at runtime, after all is booted and up and going, you just ripped
> cores out from under someone's feet? :)
>
> And the code really handles writing to that value while the module is
> already loaded and up and running? At a quick glance, it didn't seem
> like it would handle that very well as it only is checked at ne_init()
> time.
>
> Or am I missing something?
>
> Anyway, yes, if you can dynamically do this at runtime, that's great,
> but it feels awkward to me to rely on one configuration thing as a
> module parameter, and everything else through the ioctl interface.
> Unification would seem to be a good thing, right?
I personally still prefer a sysfs file :) I really don't like module
parameters as a way to do such things.
Cheers,
Ben.
On Mon, Jun 01, 2020 at 12:47:12PM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2020-05-26 at 08:51 +0200, Greg KH wrote:
> >
> > And get them to sign off on it too, showing they agree with the design
> > decisions here :)
>
> Isn't it generally frowned upon to publish a patch with internal
> sign-offs on it already?
Not at all.
> Or do you mean for us to publicly sign off once we have reviewed?
Either is fine, as long as you do the public one "quickly" and don't
rely on others to do the review first :)
thanks,
greg k-h
On 01/06/2020 05:55, Benjamin Herrenschmidt wrote:
> On Tue, 2020-05-26 at 21:35 +0300, Paraschiv, Andra-Irina wrote:
>> This was needed to have an identifier for the overall NE logic - PCI
>> dev, ioctl and misc dev.
>>
>> The ioctl and misc dev logic has pr_* logs, but I can update them to
>> dev_* with misc dev, then remove this prefix.
> Or #define pr_fmt, but dev_ is better.
Yep, the codebase now includes dev_* usage overall.
Thanks,
Andra
On 01/06/2020 05:59, Benjamin Herrenschmidt wrote:
> On Wed, 2020-05-27 at 00:21 +0200, Greg KH wrote:
>>> There are a couple of data structures with more than one member and multiple
>>> field sizes. And for the ones that are not, gathered as feedback from
>>> previous rounds of review that should consider adding a "flags" field in
>>> there for further extensibility.
>> Please do not do that in ioctls. Just create new calls instead of
>> trying to "extend" existing ones. It's always much easier.
>>
>>> I can modify to have "__packed" instead of the attribute callout.
>> Make sure you even need that, as I don't think you do for structures
>> like the above one, right?
> Hrm, my impression (granted I only just started to look at this code)
> is that these are protocol messages with the PCI devices, not strictly
> just ioctl arguments (though they do get conveyed via such ioctls).
>
> Andra-Irina, did I get that right? :-)
Correct, these data structures with the "__packed" attribute map the
messages (requests / replies) for the communication with the NE PCI
device.
The data structures from the ioctl commands are not directly used as
part of the communication with the NE PCI device, but several of their
fields are, e.g. the enclave start flags. Some of the fields from the NE
PCI device data structures, e.g. the physical address of a memory region
(gpa), are set by the internal kernel logic.
>
> That said, I still think that by carefully ordering the fields and
> using explicit padding, we can avoid the need for the packed attribute.
Regarding your question in the previous mail from this thread and the
mention above on the same topic, that should be possible. IIRC, there
were 2 data structures remaining with the "__packed" attribute.
Thank you, Ben.
Andra
On 01.06.20 05:04, Benjamin Herrenschmidt wrote:
>
>
> On Thu, 2020-05-28 at 15:12 +0200, Greg KH wrote:
>> So at runtime, after all is booted and up and going, you just ripped
>> cores out from under someone's feet? :)
>>
>> And the code really handles writing to that value while the module is
>> already loaded and up and running? At a quick glance, it didn't seem
>> like it would handle that very well as it only is checked at ne_init()
>> time.
>>
>> Or am I missing something?
>>
>> Anyway, yes, if you can dynamically do this at runtime, that's great,
>> but it feels ackward to me to rely on one configuration thing as a
>> module parameter, and everything else through the ioctl interface.
>> Unification would seem to be a good thing, right?
>
> I personally still prefer a sysfs file :) I really don't like module
> parameters as a way to do such things.
I think we're going in circles :).
A module parameter initialized with module_param_cb gives us a sysfs
file that can also have a default parameter set through easily available
tooling.
The ioctl has two downsides:
1) It relies on an external application
2) The permission check would be strictly limited to CAP_ADMIN, while
sysfs files can have different permissions
So I fail to see how a module parameter is *not* giving both of you and
me what we want? Of course, only if it implements the callback; it was
missing that, and I apologize for the oversight.
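A sketch of that module_param_cb variant, assuming the set callback is
where the pool validation would live (the validation body is elided):

    static int ne_set_ne_cpus(const char *val, const struct kernel_param *kp)
    {
            /* Validate the cpu-list and (re)build the CPU pool here,
             * then store the string for the sysfs read side. */
            return param_set_charp(val, kp);
    }

    static const struct kernel_param_ops ne_cpu_pool_ops = {
            .get = param_get_charp,
            .set = ne_set_ne_cpus,
    };

    static char *ne_cpus;
    module_param_cb(ne_cpus, &ne_cpu_pool_ops, &ne_cpus, 0644);
    MODULE_PARM_DESC(ne_cpus, "<cpu-list> - CPU pool used for Nitro Enclaves");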
Alex