# Changes since v2 [1]
* s/mbox_lock/mbox_mutex in kdocs (Ben)
* Remove stray comments about deleted flags (Ben)
* Remove flags from CXL_CMD (Ben)
* Rework cxl_mem_enumerate_cmds() to allow more than 2 commands (Ben, Jonathan)
  * I misread the spec and this needed more robust handling.
* Remove validate_payload() as it no longer is useful (Ben)
* Remove check that CEL returned reasonable command list (Ben)
  * It is easy enough to figure this out elsewhere.
* Enable sane set of commands regardless (Ben)
* Remove now useless cxl_enable_cmd() (Ben)
* Add payload dump debugging regardless of timeout (Dan)
  * Extracted to separate RFC patch (Ben)
* Move PCI_DVSEC_HEADER1_LENGTH_MASK back to cxl.h (Jonathan, Bjorn)
* Drop duplicated PCI_EXT_CAP_ID_DVSEC (Jonathan)
* Use PCI_DEVICE_CLASS (Jonathan)
* Create wrapper for kernel mailbox usage (Jonathan)
  * Helps with error conditions
* Various cosmetic changes (Jonathan)
* Remove references to removed MUTEX flag (Jonathan)
* Remove KERNEL flag since not used yet (Jonathan)
* Remove payload dumping for debug (Jonathan)
* Show example expansion from macro magic (Jonathan)
---
In addition to the mailing list, please feel free to use #cxl on oftc IRC for
discussion.
---
# Summary
Introduce support for “type-3” memory devices defined in the Compute Express
Link (CXL) 2.0 specification [2]. Specifically, these are the memory devices
defined by section 8.2.8.5 of the CXL 2.0 spec. A reference implementation
emulating these devices has been submitted to the QEMU mailing list [3] and is
available on gitlab [4], but will move to a shared tree on kernel.org after
initial acceptance. “Type-3” is a CXL device that acts as a memory expander for
RAM or Persistent Memory. The device might be interleaved with other CXL devices
in a given physical address range.
In addition to the core functionality of discovering the spec defined registers
and resources, introduce a CXL device model that will be the foundation for
translating CXL capabilities into existing Linux infrastructure for Persistent
Memory and other memory devices. For now, this only includes support for the
management command mailbox and the surfacing of type-3 devices. These control
devices fill the role of “DIMMs” / nmemX memory-devices in LIBNVDIMM terms.
## Userspace Interaction
Interaction with the driver and type-3 devices is introduced in this patch
series and is considered stable ABI. The interfaces include:
* sysfs - Documentation/ABI/testing/sysfs-bus-cxl
* IOCTL - Documentation/driver-api/cxl/memory-devices.rst
* debugfs - Documentation/ABI/testing/debugfs-debug
Work is in progress to add support for CXL interactions to the ndctl project [5].
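As a quick orientation for the IOCTL path, here is a minimal userspace sketch
(not part of this series). It assumes the memory device's character device
shows up as /dev/cxl/mem0; the CXL_MEM_SEND_COMMAND ioctl, the
CXL_MEM_COMMAND_ID_IDENTIFY id, and the in/out payload fields come from the
include/uapi/linux/cxl_mem.h header added by this series (only partially quoted
in the hunks below), and the 0x43-byte output size matches the
CXL_CMD(IDENTIFY, 0, 0x43) entry.

/* Sketch only: issue Identify to an assumed /dev/cxl/mem0 node. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/cxl_mem.h>

int main(void)
{
	struct cxl_send_command cmd;
	uint8_t id_out[0x43];	/* Identify output payload size */
	int fd, rc;

	fd = open("/dev/cxl/mem0", O_RDWR);
	if (fd < 0)
		return 1;

	memset(&cmd, 0, sizeof(cmd));
	cmd.id = CXL_MEM_COMMAND_ID_IDENTIFY;
	cmd.out.payload = (uint64_t)(uintptr_t)id_out;
	cmd.out.size = sizeof(id_out);

	rc = ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd);
	if (rc == 0)
		printf("identify retval: %u, %u output bytes\n",
		       cmd.retval, cmd.out.size);

	close(fd);
	return rc ? 1 : 0;
}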
### Development plans
One of the unique challenges that CXL imposes on the Linux driver model is that
it requires the operating system to perform physical address space management
interleaved across devices and bridges. Whereas LIBNVDIMM handles a list of
established static persistent memory address ranges (for example from the ACPI
NFIT), CXL introduces hotplug and the concept of allocating address space to
instantiate persistent memory ranges. This is similar to PCI in the sense that
the platform establishes the MMIO range for PCI BARs to be allocated, but it is
significantly complicated by the fact that a given device can optionally be
interleaved with other devices and can participate in several interleave-sets at
once. LIBNVDIMM handled something like this with the aliasing between PMEM and
BLOCK-WINDOW mode, but CXL adds flexibility to alias DEVICE MEMORY through up to
10 decoders per device.
All of the above needs to be enabled with respect to PCI hotplug events on
Type-3 memory devices, which requires hooks to determine whether a given device
is contributing to a "System RAM" address range that cannot be unplugged. In
other words, CXL ties PCI hotplug to memory hotplug, and PCI hotplug needs to
be able to negotiate with memory hotplug. In the medium term the implications of
CXL hotplug vs ACPI SRAT/SLIT/HMAT need to be reconciled. One capability that
seems to be needed is either the dynamic allocation of new memory nodes, or
default initializing extra pgdat instances beyond what is enumerated in ACPI
SRAT to accommodate hot-added CXL memory.
Patches welcome, questions welcome as the development effort on the post v5.12
capabilities proceeds.
## Running in QEMU
The incantation to get CXL support in QEMU [4] is considered unstable at this
time. Future readers of this cover letter should verify if any changes are
needed. For the novice QEMU user, the following can be copy/pasted into a
working QEMU commandline. It is enough to make the simplest topology possible.
The topology would consist of a single memory window, single type3 device,
single root port, and single host bridge.
+-------------+
| CXL PXB |
| |
| +-------+ |<----------+
| |CXL RP | | |
+--+-------+--+ v
| +----------+
| | "window" |
| +----------+
v ^
+-------------+ |
| CXL Type 3 | |
| Device |<----------+
+-------------+
// Memory backend for "window"
-object memory-backend-file,id=cxl-mem1,share,mem-path=cxl-type3,size=512M
// Memory backend for LSA
-object memory-backend-file,id=cxl-mem1-lsa,share,mem-path=cxl-mem1-lsa,size=1K
// Host Bridge
-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52,uid=0,len-window-base=1,window-base[0]=0x4c0000000,memdev[0]=cxl-mem1
// Single root port
-device cxl-rp,id=rp0,bus=cxl.0,addr=0.0,chassis=0,slot=0,memdev=cxl-mem1
// Single type3 device
-device cxl-type3,bus=rp0,memdev=cxl-mem1,id=cxl-pmem0,size=256M,lsa=cxl-mem1-lsa
---
[1]: https://lore.kernel.org/linux-cxl/[email protected]/
[2]: https://www.computeexpresslink.org/
[3]: https://lore.kernel.org/qemu-devel/[email protected]/
[4]: https://gitlab.com/bwidawsk/qemu/-/tree/cxl-2.0v4
[5]: https://github.com/pmem/ndctl/tree/cxl-2.0v2
Ben Widawsky (7):
cxl/mem: Find device capabilities
cxl/mem: Add basic IOCTL interface
cxl/mem: Add a "RAW" send command
cxl/mem: Enable commands via CEL
cxl/mem: Add set of informational commands
MAINTAINERS: Add maintainers of the CXL driver
cxl/mem: Add payload dumping for debug
Dan Williams (2):
cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
cxl/mem: Register CXL memX devices
.clang-format | 1 +
Documentation/ABI/testing/sysfs-bus-cxl | 26 +
Documentation/driver-api/cxl/index.rst | 12 +
.../driver-api/cxl/memory-devices.rst | 46 +
Documentation/driver-api/index.rst | 1 +
.../userspace-api/ioctl/ioctl-number.rst | 1 +
MAINTAINERS | 11 +
drivers/Kconfig | 1 +
drivers/Makefile | 1 +
drivers/cxl/Kconfig | 66 +
drivers/cxl/Makefile | 7 +
drivers/cxl/bus.c | 29 +
drivers/cxl/cxl.h | 93 +
drivers/cxl/mem.c | 1531 +++++++++++++++++
drivers/cxl/pci.h | 31 +
include/linux/pci_ids.h | 1 +
include/uapi/linux/cxl_mem.h | 170 ++
17 files changed, 2028 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-bus-cxl
create mode 100644 Documentation/driver-api/cxl/index.rst
create mode 100644 Documentation/driver-api/cxl/memory-devices.rst
create mode 100644 drivers/cxl/Kconfig
create mode 100644 drivers/cxl/Makefile
create mode 100644 drivers/cxl/bus.c
create mode 100644 drivers/cxl/cxl.h
create mode 100644 drivers/cxl/mem.c
create mode 100644 drivers/cxl/pci.h
create mode 100644 include/uapi/linux/cxl_mem.h
---
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Bjorn Helgaas <[email protected]>
Cc: Chris Browy <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Ira Weiny <[email protected]>
Cc: Jon Masters <[email protected]>
Cc: Jonathan Cameron <[email protected]>
Cc: Rafael Wysocki <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Vishal Verma <[email protected]>
Cc: "John Groves (jgroves)" <[email protected]>
Cc: "Kelley, Sean V" <[email protected]>
--
2.30.0
The CXL memory device send interface will have a number of supported
commands. The raw command is not such a command. Raw commands allow
userspace to send a specified opcode to the underlying hardware and
bypass all driver checks on the command. The primary use for this
command is to [begrudgingly] allow undocumented vendor specific hardware
commands.
While not the main motivation, it also allows prototyping new hardware
commands without a driver patch and rebuild.
While this all sounds very powerful, it comes with a couple of caveats:
1. Bug reports using raw commands will not get the same level of
attention as bug reports using supported commands (via taint).
2. Supported commands will be rejected by the RAW command.
With this comes a new debugfs knob to allow full access to your toes with
your weapon of choice.
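As a rough illustration (not part of this patch), a raw submission from
userspace might look like the sketch below. The opcode 0xC000 stands in for a
hypothetical vendor-defined opcode, the field names follow the uapi change in
the hunk below, and using this path taints the kernel. Opcodes on the deny
list are refused unless the cxl_raw_allow_all debugfs knob added here (under
<debugfs>/cxl/mbox/) is set.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/cxl_mem.h>

/* Sketch only: hand an arbitrary opcode to the hardware via RAW. */
static int send_raw(int fd, uint16_t opcode, void *resp, uint32_t resp_sz)
{
	struct cxl_send_command cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.id = CXL_MEM_COMMAND_ID_RAW;
	cmd.raw.opcode = opcode;	/* e.g. 0xC000, passed through unchecked */
	cmd.out.payload = (uint64_t)(uintptr_t)resp;
	cmd.out.size = resp_sz;

	return ioctl(fd, CXL_MEM_SEND_COMMAND, &cmd);
}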
Cc: Ariel Sibley <[email protected]>
Signed-off-by: Ben Widawsky <[email protected]>
Reviewed-by: Dan Williams <[email protected]> (v2)
---
drivers/cxl/Kconfig | 18 ++++++
drivers/cxl/mem.c | 121 +++++++++++++++++++++++++++++++++++
include/uapi/linux/cxl_mem.h | 12 +++-
3 files changed, 150 insertions(+), 1 deletion(-)
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 9e80b311e928..97dc4d751651 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -32,4 +32,22 @@ config CXL_MEM
Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
If unsure say 'm'.
+
+config CXL_MEM_RAW_COMMANDS
+ bool "RAW Command Interface for Memory Devices"
+ depends on CXL_MEM
+ help
+ Enable CXL RAW command interface.
+
+ The CXL driver ioctl interface may assign a kernel ioctl command
+ number for each specification defined opcode. At any given point in
+ time the number of opcodes that the specification defines and a device
+ may implement may exceed the kernel's set of associated ioctl function
+ numbers. The mismatch is either by omission, specification is too new,
+ or by design. When prototyping new hardware, or developing / debugging
+ the driver it is useful to be able to submit any possible command to
+ the hardware, even commands that may crash the kernel due to their
+ potential impact to memory currently in use by the kernel.
+
+ If developing CXL hardware or the driver say Y, otherwise say N.
endif
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index d764a35afea9..a819f090ffe2 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
#include <uapi/linux/cxl_mem.h>
+#include <linux/security.h>
+#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/cdev.h>
@@ -41,7 +43,14 @@
enum opcode {
CXL_MBOX_OP_INVALID = 0x0000,
+ CXL_MBOX_OP_RAW = CXL_MBOX_OP_INVALID,
+ CXL_MBOX_OP_ACTIVATE_FW = 0x0202,
CXL_MBOX_OP_IDENTIFY = 0x4000,
+ CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101,
+ CXL_MBOX_OP_SET_LSA = 0x4103,
+ CXL_MBOX_OP_SET_SHUTDOWN_STATE = 0x4204,
+ CXL_MBOX_OP_SCAN_MEDIA = 0x4304,
+ CXL_MBOX_OP_GET_SCAN_MEDIA = 0x4305,
CXL_MBOX_OP_MAX = 0x10000
};
@@ -91,6 +100,8 @@ struct cxl_memdev {
static int cxl_mem_major;
static DEFINE_IDA(cxl_memdev_ida);
+static struct dentry *cxl_debugfs;
+static bool cxl_raw_allow_all;
/**
* struct cxl_mem_command - Driver representation of a memory device command
@@ -127,6 +138,49 @@ struct cxl_mem_command {
*/
static struct cxl_mem_command mem_commands[] = {
CXL_CMD(IDENTIFY, 0, 0x43),
+#ifdef CONFIG_CXL_MEM_RAW_COMMANDS
+ CXL_CMD(RAW, ~0, ~0),
+#endif
+};
+
+/*
+ * Commands that RAW doesn't permit. The rationale for each:
+ *
+ * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment /
+ * coordination of transaction timeout values at the root bridge level.
+ *
+ * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live
+ * and needs to be coordinated with HDM updates.
+ *
+ * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the
+ * driver and any writes from userspace invalidates those contents.
+ *
+ * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes
+ * to the device after it is marked clean, userspace can not make that
+ * assertion.
+ *
+ * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that
+ * is kept up to date with patrol notifications and error management.
+ */
+static u16 cxl_disabled_raw_commands[] = {
+ CXL_MBOX_OP_ACTIVATE_FW,
+ CXL_MBOX_OP_SET_PARTITION_INFO,
+ CXL_MBOX_OP_SET_LSA,
+ CXL_MBOX_OP_SET_SHUTDOWN_STATE,
+ CXL_MBOX_OP_SCAN_MEDIA,
+ CXL_MBOX_OP_GET_SCAN_MEDIA,
+};
+
+/*
+ * Command sets that RAW doesn't permit. All opcodes in this set are
+ * disabled because they pass plain text security payloads over the
+ * user/kernel boundary. This functionality is intended to be wrapped
+ * behind the keys ABI which allows for encrypted payloads in the UAPI
+ */
+static u8 security_command_sets[] = {
+ 0x44, /* Sanitize */
+ 0x45, /* Persistent Memory Data-at-rest Security */
+ 0x46, /* Security Passthrough */
};
#define cxl_for_each_cmd(cmd) \
@@ -157,6 +211,16 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
return 0;
}
+static bool cxl_is_security_command(u16 opcode)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(security_command_sets); i++)
+ if (security_command_sets[i] == (opcode >> 8))
+ return true;
+ return false;
+}
+
static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
struct mbox_cmd *mbox_cmd)
{
@@ -427,6 +491,9 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
cxl_command_names[cmd->info.id].name, mbox_cmd.opcode,
cmd->info.size_in);
+ dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW,
+ "raw command path used\n");
+
rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
cxl_mem_mbox_put(cxlm);
if (rc)
@@ -457,6 +524,29 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
return rc;
}
+static bool cxl_mem_raw_command_allowed(u16 opcode)
+{
+ int i;
+
+ if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
+ return false;
+
+ if (security_locked_down(LOCKDOWN_NONE))
+ return false;
+
+ if (cxl_raw_allow_all)
+ return true;
+
+ if (cxl_is_security_command(opcode))
+ return false;
+
+ for (i = 0; i < ARRAY_SIZE(cxl_disabled_raw_commands); i++)
+ if (cxl_disabled_raw_commands[i] == opcode)
+ return false;
+
+ return true;
+}
+
/**
* cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND.
* @cxlm: &struct cxl_mem device whose mailbox will be used.
@@ -468,6 +558,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd,
* * %-ENOTTY - Invalid command specified.
* * %-EINVAL - Reserved fields or invalid values were used.
* * %-ENOMEM - Input or output buffer wasn't sized properly.
+ * * %-EPERM - Attempted to use a protected command.
*
* The result of this command is a fully validated command in @out_cmd that is
* safe to send to the hardware.
@@ -492,6 +583,29 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
if (send_cmd->in.size > cxlm->payload_size)
return -EINVAL;
+ /* Checks are bypassed for raw commands but along comes the taint! */
+ if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) {
+ const struct cxl_mem_command temp = {
+ .info = {
+ .id = CXL_MEM_COMMAND_ID_RAW,
+ .flags = 0,
+ .size_in = send_cmd->in.size,
+ .size_out = send_cmd->out.size,
+ },
+ .opcode = send_cmd->raw.opcode
+ };
+
+ if (send_cmd->raw.rsvd)
+ return -EINVAL;
+
+ if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
+ return -EPERM;
+
+ memcpy(out_cmd, &temp, sizeof(temp));
+
+ return 0;
+ }
+
if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK)
return -EINVAL;
@@ -1144,6 +1258,7 @@ static struct pci_driver cxl_mem_driver = {
static __init int cxl_mem_init(void)
{
+ struct dentry *mbox_debugfs;
dev_t devt;
int rc;
@@ -1160,11 +1275,17 @@ static __init int cxl_mem_init(void)
return rc;
}
+ cxl_debugfs = debugfs_create_dir("cxl", NULL);
+ mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs);
+ debugfs_create_bool("cxl_raw_allow_all", 0600, mbox_debugfs,
+ &cxl_raw_allow_all);
+
return 0;
}
static __exit void cxl_mem_exit(void)
{
+ debugfs_remove_recursive(cxl_debugfs);
pci_unregister_driver(&cxl_mem_driver);
unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS);
}
diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h
index 18cea908ad0b..8eb669150ecb 100644
--- a/include/uapi/linux/cxl_mem.h
+++ b/include/uapi/linux/cxl_mem.h
@@ -22,6 +22,7 @@
#define CXL_CMDS \
___C(INVALID, "Invalid Command"), \
___C(IDENTIFY, "Identify Command"), \
+ ___C(RAW, "Raw device command"), \
___C(MAX, "invalid / last command")
#define ___C(a, b) CXL_MEM_COMMAND_ID_##a
@@ -115,6 +116,9 @@ struct cxl_mem_query_commands {
* @id: The command to send to the memory device. This must be one of the
* commands returned by the query command.
* @flags: Flags for the command (input).
+ * @raw: Special fields for raw commands
+ * @raw.opcode: Opcode passed to hardware when using the RAW command.
+ * @raw.rsvd: Must be zero.
* @rsvd: Must be zero.
* @retval: Return value from the memory device (output).
* @in.size: Size of the payload to provide to the device (input).
@@ -135,7 +139,13 @@ struct cxl_mem_query_commands {
struct cxl_send_command {
__u32 id;
__u32 flags;
- __u32 rsvd;
+ union {
+ struct {
+ __u16 opcode;
+ __u16 rsvd;
+ } raw;
+ __u32 rsvd;
+ };
__u32 retval;
struct {
--
2.30.0
From: Dan Williams <[email protected]>
The CXL.mem protocol allows a device to act as a provider of "System
RAM" and/or "Persistent Memory" that is fully coherent as if the memory
was attached to the typical CPU memory controller.
With the CXL-2.0 specification a PCI endpoint can implement a "Type-3"
device interface and give the operating system control over "Host
Managed Device Memory". See section 2.3 Type 3 CXL Device.
The memory range exported by the device may optionally be described by
the platform firmware memory map, or by infrastructure like LIBNVDIMM to
provision persistent memory capacity from one, or more, CXL.mem devices.
A pre-requisite for Linux-managed memory-capacity provisioning is this
cxl_mem driver that can speak the mailbox protocol defined in section
8.2.8.4 Mailbox Registers.
For now just land the initial driver boiler-plate and Documentation/
infrastructure.
Link: https://www.computeexpresslink.org/download-the-specification
Cc: Jonathan Corbet <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Ben Widawsky <[email protected]>
Acked-by: David Rientjes <[email protected]> (v1)
Reviewed-by: Jonathan Cameron <[email protected]>
---
Documentation/driver-api/cxl/index.rst | 12 ++++
.../driver-api/cxl/memory-devices.rst | 29 +++++++++
Documentation/driver-api/index.rst | 1 +
drivers/Kconfig | 1 +
drivers/Makefile | 1 +
drivers/cxl/Kconfig | 35 +++++++++++
drivers/cxl/Makefile | 4 ++
drivers/cxl/mem.c | 62 +++++++++++++++++++
drivers/cxl/pci.h | 17 +++++
include/linux/pci_ids.h | 1 +
10 files changed, 163 insertions(+)
create mode 100644 Documentation/driver-api/cxl/index.rst
create mode 100644 Documentation/driver-api/cxl/memory-devices.rst
create mode 100644 drivers/cxl/Kconfig
create mode 100644 drivers/cxl/Makefile
create mode 100644 drivers/cxl/mem.c
create mode 100644 drivers/cxl/pci.h
diff --git a/Documentation/driver-api/cxl/index.rst b/Documentation/driver-api/cxl/index.rst
new file mode 100644
index 000000000000..036e49553542
--- /dev/null
+++ b/Documentation/driver-api/cxl/index.rst
@@ -0,0 +1,12 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====================
+Compute Express Link
+====================
+
+.. toctree::
+ :maxdepth: 1
+
+ memory-devices
+
+.. only:: subproject and html
diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
new file mode 100644
index 000000000000..43177e700d62
--- /dev/null
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -0,0 +1,29 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. include:: <isonum.txt>
+
+===================================
+Compute Express Link Memory Devices
+===================================
+
+A Compute Express Link Memory Device is a CXL component that implements the
+CXL.mem protocol. It contains some amount of volatile memory, persistent memory,
+or both. It is enumerated as a PCI device for configuration and passing
+messages over an MMIO mailbox. Its contribution to the System Physical
+Address space is handled via HDM (Host Managed Device Memory) decoders
+that optionally define a device's contribution to an interleaved address
+range across multiple devices underneath a host-bridge or interleaved
+across host-bridges.
+
+Driver Infrastructure
+=====================
+
+This section covers the driver infrastructure for a CXL memory device.
+
+CXL Memory Device
+-----------------
+
+.. kernel-doc:: drivers/cxl/mem.c
+ :doc: cxl mem
+
+.. kernel-doc:: drivers/cxl/mem.c
+ :internal:
diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
index 2456d0a97ed8..d246a18fd78f 100644
--- a/Documentation/driver-api/index.rst
+++ b/Documentation/driver-api/index.rst
@@ -35,6 +35,7 @@ available subsections can be seen below.
usb/index
firewire
pci/index
+ cxl/index
spi
i2c
ipmb
diff --git a/drivers/Kconfig b/drivers/Kconfig
index dcecc9f6e33f..62c753a73651 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -6,6 +6,7 @@ menu "Device Drivers"
source "drivers/amba/Kconfig"
source "drivers/eisa/Kconfig"
source "drivers/pci/Kconfig"
+source "drivers/cxl/Kconfig"
source "drivers/pcmcia/Kconfig"
source "drivers/rapidio/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index fd11b9ac4cc3..678ea810410f 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -73,6 +73,7 @@ obj-$(CONFIG_NVM) += lightnvm/
obj-y += base/ block/ misc/ mfd/ nfc/
obj-$(CONFIG_LIBNVDIMM) += nvdimm/
obj-$(CONFIG_DAX) += dax/
+obj-$(CONFIG_CXL_BUS) += cxl/
obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
obj-$(CONFIG_NUBUS) += nubus/
obj-y += macintosh/
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
new file mode 100644
index 000000000000..9e80b311e928
--- /dev/null
+++ b/drivers/cxl/Kconfig
@@ -0,0 +1,35 @@
+# SPDX-License-Identifier: GPL-2.0-only
+menuconfig CXL_BUS
+ tristate "CXL (Compute Express Link) Devices Support"
+ depends on PCI
+ help
+ CXL is a bus that is electrically compatible with PCI Express, but
+ layers three protocols on that signalling (CXL.io, CXL.cache, and
+ CXL.mem). The CXL.cache protocol allows devices to hold cachelines
+ locally, the CXL.mem protocol allows devices to be fully coherent
+ memory targets, the CXL.io protocol is equivalent to PCI Express.
+ Say 'y' to enable support for the configuration and management of
+ devices supporting these protocols.
+
+if CXL_BUS
+
+config CXL_MEM
+ tristate "CXL.mem: Memory Devices"
+ help
+ The CXL.mem protocol allows a device to act as a provider of
+ "System RAM" and/or "Persistent Memory" that is fully coherent
+ as if the memory was attached to the typical CPU memory
+ controller.
+
+ Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as
+ a module) that will attach to CXL.mem devices for
+ configuration, provisioning, and health monitoring. This
+ driver is required for dynamic provisioning of CXL.mem
+ attached memory which is a prerequisite for persistent memory
+ support. Typically volatile memory is mapped by platform
+ firmware and included in the platform memory map, but in some
+ cases the OS is responsible for mapping that memory. See
+ Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification.
+
+ If unsure say 'm'.
+endif
diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile
new file mode 100644
index 000000000000..4a30f7c3fc4a
--- /dev/null
+++ b/drivers/cxl/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_CXL_MEM) += cxl_mem.o
+
+cxl_mem-y := mem.o
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
new file mode 100644
index 000000000000..ce33c5ee77c9
--- /dev/null
+++ b/drivers/cxl/mem.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/io.h>
+#include "pci.h"
+
+static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
+{
+ int pos;
+
+ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC);
+ if (!pos)
+ return 0;
+
+ while (pos) {
+ u16 vendor, id;
+
+ pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vendor);
+ pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, &id);
+ if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id)
+ return pos;
+
+ pos = pci_find_next_ext_capability(pdev, pos,
+ PCI_EXT_CAP_ID_DVSEC);
+ }
+
+ return 0;
+}
+
+static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct device *dev = &pdev->dev;
+ int regloc;
+
+ regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
+ if (!regloc) {
+ dev_err(dev, "register location dvsec not found\n");
+ return -ENXIO;
+ }
+
+ return 0;
+}
+
+static const struct pci_device_id cxl_mem_pci_tbl[] = {
+ /* PCI class code for CXL.mem Type-3 Devices */
+ { PCI_DEVICE_CLASS((PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF), ~0)},
+ { /* terminate list */ },
+};
+MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);
+
+static struct pci_driver cxl_mem_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = cxl_mem_pci_tbl,
+ .probe = cxl_mem_probe,
+ .driver = {
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ },
+};
+
+MODULE_LICENSE("GPL v2");
+module_pci_driver(cxl_mem_driver);
diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
new file mode 100644
index 000000000000..e464bea3f4d3
--- /dev/null
+++ b/drivers/cxl/pci.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+#ifndef __CXL_PCI_H__
+#define __CXL_PCI_H__
+
+#define CXL_MEMORY_PROGIF 0x10
+
+/*
+ * See section 8.1 Configuration Space Registers in the CXL 2.0
+ * Specification
+ */
+#define PCI_DVSEC_VENDOR_ID_CXL 0x1E98
+#define PCI_DVSEC_ID_CXL 0x0
+
+#define PCI_DVSEC_ID_CXL_REGLOC_OFFSET 0x8
+
+#endif /* __CXL_PCI_H__ */
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index d8156a5dbee8..766260a9b247 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -51,6 +51,7 @@
#define PCI_BASE_CLASS_MEMORY 0x05
#define PCI_CLASS_MEMORY_RAM 0x0500
#define PCI_CLASS_MEMORY_FLASH 0x0501
+#define PCI_CLASS_MEMORY_CXL 0x0502
#define PCI_CLASS_MEMORY_OTHER 0x0580
#define PCI_BASE_CLASS_BRIDGE 0x06
--
2.30.0
Provide enough functionality to utilize the mailbox of a memory device.
The mailbox is used to interact with the firmware running on the memory
device. The flow is proven with one implemented command, "identify", chosen
because the class code has already told the driver this is a memory device and
the identify command is mandatory.
CXL devices contain an array of capabilities that describe the
interactions software can have with the device or firmware running on
the device. A CXL compliant device must implement the device status and
the mailbox capability. Additionally, a CXL compliant memory device must
implement the memory device capability. Each of the capabilities can
[will] provide an offset within the MMIO region for interacting with the
CXL device.
The capabilities tell the driver how to find and map the register space
for CXL Memory Devices. The registers are required to utilize the CXL
spec defined mailbox interface. The spec outlines two mailboxes, primary
and secondary. The secondary mailbox is earmarked for system firmware,
and not handled in this driver.
Primary mailboxes are capable of generating an interrupt when submitting
a background command. That implementation is saved for a later time.
Link: https://www.computeexpresslink.org/download-the-specification
Signed-off-by: Ben Widawsky <[email protected]>
Reviewed-by: Dan Williams <[email protected]> (v2)
---
drivers/cxl/cxl.h | 88 ++++++++
drivers/cxl/mem.c | 542 +++++++++++++++++++++++++++++++++++++++++++++-
drivers/cxl/pci.h | 14 ++
3 files changed, 642 insertions(+), 2 deletions(-)
create mode 100644 drivers/cxl/cxl.h
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
new file mode 100644
index 000000000000..9cd9bc79fc48
--- /dev/null
+++ b/drivers/cxl/cxl.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2020 Intel Corporation. */
+
+#ifndef __CXL_H__
+#define __CXL_H__
+
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/io.h>
+
+/* CXL 2.0 8.2.8.1 Device Capabilities Array Register */
+#define CXLDEV_CAP_ARRAY_OFFSET 0x0
+#define CXLDEV_CAP_ARRAY_CAP_ID 0
+#define CXLDEV_CAP_ARRAY_ID_MASK GENMASK(15, 0)
+#define CXLDEV_CAP_ARRAY_COUNT_MASK GENMASK(47, 32)
+/* CXL 2.0 8.2.8.2.1 CXL Device Capabilities */
+#define CXLDEV_CAP_CAP_ID_DEVICE_STATUS 0x1
+#define CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX 0x2
+#define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3
+#define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000
+
+/* CXL 2.0 8.2.8.4 Mailbox Registers */
+#define CXLDEV_MBOX_CAPS_OFFSET 0x00
+#define CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0)
+#define CXLDEV_MBOX_CTRL_OFFSET 0x04
+#define CXLDEV_MBOX_CTRL_DOORBELL BIT(0)
+#define CXLDEV_MBOX_CMD_OFFSET 0x08
+#define CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK GENMASK(15, 0)
+#define CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK GENMASK(36, 16)
+#define CXLDEV_MBOX_STATUS_OFFSET 0x10
+#define CXLDEV_MBOX_STATUS_RET_CODE_MASK GENMASK(47, 32)
+#define CXLDEV_MBOX_BG_CMD_STATUS_OFFSET 0x18
+#define CXLDEV_MBOX_PAYLOAD_OFFSET 0x20
+
+/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
+#define CXLMDEV_STATUS_OFFSET 0x0
+#define CXLMDEV_DEV_FATAL BIT(0)
+#define CXLMDEV_FW_HALT BIT(1)
+#define CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2)
+#define CXLMDEV_MS_NOT_READY 0
+#define CXLMDEV_MS_READY 1
+#define CXLMDEV_MS_ERROR 2
+#define CXLMDEV_MS_DISABLED 3
+#define CXLMDEV_READY(status) \
+ (FIELD_GET(CXLMDEV_STATUS_MEDIA_STATUS_MASK, status) == \
+ CXLMDEV_MS_READY)
+#define CXLMDEV_MBOX_IF_READY BIT(4)
+#define CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5)
+#define CXLMDEV_RESET_NEEDED_NOT 0
+#define CXLMDEV_RESET_NEEDED_COLD 1
+#define CXLMDEV_RESET_NEEDED_WARM 2
+#define CXLMDEV_RESET_NEEDED_HOT 3
+#define CXLMDEV_RESET_NEEDED_CXL 4
+#define CXLMDEV_RESET_NEEDED(status) \
+ (FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) != \
+ CXLMDEV_RESET_NEEDED_NOT)
+
+/**
+ * struct cxl_mem - A CXL memory device
+ * @pdev: The PCI device associated with this CXL device.
+ * @regs: IO mappings to the device's MMIO
+ * @status_regs: CXL 2.0 8.2.8.3 Device Status Registers
+ * @mbox_regs: CXL 2.0 8.2.8.4 Mailbox Registers
+ * @memdev_regs: CXL 2.0 8.2.8.5 Memory Device Registers
+ * @payload_size: Size of space for payload
+ * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
+ * @mbox_mutex: Mutex to synchronize mailbox access.
+ * @firmware_version: Firmware version for the memory device.
+ * @pmem: Persistent memory capacity information.
+ * @ram: Volatile memory capacity information.
+ */
+struct cxl_mem {
+ struct pci_dev *pdev;
+ void __iomem *regs;
+
+ void __iomem *status_regs;
+ void __iomem *mbox_regs;
+ void __iomem *memdev_regs;
+
+ size_t payload_size;
+ struct mutex mbox_mutex; /* Protects device mailbox and firmware */
+ char firmware_version[0x10];
+
+ struct range pmem_range;
+ struct range ram_range;
+};
+
+#endif /* __CXL_H__ */
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index ce33c5ee77c9..2b289a311f27 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -4,6 +4,456 @@
#include <linux/pci.h>
#include <linux/io.h>
#include "pci.h"
+#include "cxl.h"
+
+#define cxl_doorbell_busy(cxlm) \
+ (readl((cxlm)->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET) & \
+ CXLDEV_MBOX_CTRL_DOORBELL)
+
+/* CXL 2.0 - 8.2.8.4 */
+#define CXL_MAILBOX_TIMEOUT_MS (2 * HZ)
+
+enum opcode {
+ CXL_MBOX_OP_IDENTIFY = 0x4000,
+ CXL_MBOX_OP_MAX = 0x10000
+};
+
+/**
+ * struct mbox_cmd - A command to be submitted to hardware.
+ * @opcode: (input) The command set and command submitted to hardware.
+ * @payload_in: (input) Pointer to the input payload.
+ * @payload_out: (output) Pointer to the output payload. Must be allocated by
+ * the caller.
+ * @size_in: (input) Number of bytes to load from @payload_in.
+ * @size_out: (output) Number of bytes loaded into @payload_out.
+ * @return_code: (output) Error code returned from hardware.
+ *
+ * This is the primary mechanism used to send commands to the hardware.
+ * All the fields except @payload_* correspond exactly to the fields described in
+ * Command Register section of the CXL 2.0 8.2.8.4.5. @payload_in and
+ * @payload_out are written to, and read from the Command Payload Registers
+ * defined in CXL 2.0 8.2.8.4.8.
+ */
+struct mbox_cmd {
+ u16 opcode;
+ void *payload_in;
+ void *payload_out;
+ size_t size_in;
+ size_t size_out;
+ u16 return_code;
+#define CXL_MBOX_SUCCESS 0
+};
+
+static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm)
+{
+ const unsigned long start = jiffies;
+ unsigned long end = start;
+
+ while (cxl_doorbell_busy(cxlm)) {
+ end = jiffies;
+
+ if (time_after(end, start + CXL_MAILBOX_TIMEOUT_MS)) {
+ /* Check again in case preempted before timeout test */
+ if (!cxl_doorbell_busy(cxlm))
+ break;
+ return -ETIMEDOUT;
+ }
+ cpu_relax();
+ }
+
+ dev_dbg(&cxlm->pdev->dev, "Doorbell wait took %dms",
+ jiffies_to_msecs(end) - jiffies_to_msecs(start));
+ return 0;
+}
+
+static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm,
+ struct mbox_cmd *mbox_cmd)
+{
+ struct device *dev = &cxlm->pdev->dev;
+
+ dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n",
+ mbox_cmd->opcode, mbox_cmd->size_in);
+}
+
+/**
+ * __cxl_mem_mbox_send_cmd() - Execute a mailbox command
+ * @cxlm: The CXL memory device to communicate with.
+ * @mbox_cmd: Command to send to the memory device.
+ *
+ * Context: Any context. Expects mbox_mutex to be held.
+ * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success.
+ * Caller should check the return code in @mbox_cmd to make sure it
+ * succeeded.
+ *
+ * This is a generic form of the CXL mailbox send command thus only using the
+ * registers defined by the mailbox capability ID - CXL 2.0 8.2.8.4. Memory
+ * devices, and perhaps other types of CXL devices may have further information
+ * available upon error conditions. Driver facilities wishing to send mailbox
+ * commands should use the wrapper command.
+ *
+ * The CXL spec allows for up to two mailboxes. The intention is for the primary
+ * mailbox to be OS controlled and the secondary mailbox to be used by system
+ * firmware. This allows the OS and firmware to communicate with the device and
+ * not need to coordinate with each other. The driver only uses the primary
+ * mailbox.
+ */
+static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
+ struct mbox_cmd *mbox_cmd)
+{
+ void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET;
+ u64 cmd_reg, status_reg;
+ size_t out_len;
+ int rc;
+
+ lockdep_assert_held(&cxlm->mbox_mutex);
+
+ /*
+ * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
+ * 1. Caller reads MB Control Register to verify doorbell is clear
+ * 2. Caller writes Command Register
+ * 3. Caller writes Command Payload Registers if input payload is non-empty
+ * 4. Caller writes MB Control Register to set doorbell
+ * 5. Caller either polls for doorbell to be clear or waits for interrupt if configured
+ * 6. Caller reads MB Status Register to fetch Return code
+ * 7. If command successful, Caller reads Command Register to get Payload Length
+ * 8. If output payload is non-empty, host reads Command Payload Registers
+ *
+ * Hardware is free to do whatever it wants before the doorbell is rung,
+ * and isn't allowed to change anything after it clears the doorbell. As
+ * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can
+ * also happen in any order (though some orders might not make sense).
+ */
+
+ /* #1 */
+ if (cxl_doorbell_busy(cxlm)) {
+ dev_err_ratelimited(&cxlm->pdev->dev,
+ "Mailbox re-busy after acquiring\n");
+ return -EBUSY;
+ }
+
+ cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK,
+ mbox_cmd->opcode);
+ if (mbox_cmd->size_in) {
+ if (WARN_ON(!mbox_cmd->payload_in))
+ return -EINVAL;
+
+ cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK,
+ mbox_cmd->size_in);
+ memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in);
+ }
+
+ /* #2, #3 */
+ writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
+
+ /* #4 */
+ dev_dbg(&cxlm->pdev->dev, "Sending command\n");
+ writel(CXLDEV_MBOX_CTRL_DOORBELL,
+ cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);
+
+ /* #5 */
+ rc = cxl_mem_wait_for_doorbell(cxlm);
+ if (rc == -ETIMEDOUT) {
+ cxl_mem_mbox_timeout(cxlm, mbox_cmd);
+ return rc;
+ }
+
+ /* #6 */
+ status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
+ mbox_cmd->return_code =
+ FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg);
+
+ if (mbox_cmd->return_code != 0) {
+ dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n");
+ return 0;
+ }
+
+ /* #7 */
+ cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);
+ out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg);
+
+ /* #8 */
+ if (out_len && mbox_cmd->payload_out) {
+ size_t n = min_t(size_t, cxlm->payload_size, out_len);
+
+ memcpy_fromio(mbox_cmd->payload_out, payload, n);
+ mbox_cmd->size_out = n;
+ } else {
+ mbox_cmd->size_out = 0;
+ }
+
+ return 0;
+}
+
+/**
+ * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox.
+ * @cxlm: The memory device to gain access to.
+ *
+ * Context: Any context. Takes the mbox_mutex.
+ * Return: 0 if exclusive access was acquired.
+ */
+static int cxl_mem_mbox_get(struct cxl_mem *cxlm)
+{
+ struct device *dev = &cxlm->pdev->dev;
+ int rc = -EBUSY;
+ u64 md_status;
+
+ mutex_lock_io(&cxlm->mbox_mutex);
+
+ /*
+ * XXX: There is some amount of ambiguity in the 2.0 version of the spec
+ * around the mailbox interface ready (8.2.8.5.1.1). The purpose of the
+ * bit is to allow firmware running on the device to notify the driver
+ * that it's ready to receive commands. It is unclear if the bit needs
+ * to be read for each mailbox transaction, i.e. the firmware can switch
+ * it on and off as needed. Second, there is no defined timeout for
+ * mailbox ready, like there is for the doorbell interface.
+ *
+ * Assumptions:
+ * 1. The firmware might toggle the Mailbox Interface Ready bit, check
+ * it for every command.
+ *
+ * 2. If the doorbell is clear, the firmware should have first set the
+ * Mailbox Interface Ready bit. Therefore, waiting for the doorbell
+ * to be ready is sufficient.
+ */
+ rc = cxl_mem_wait_for_doorbell(cxlm);
+ if (rc) {
+ dev_warn(dev, "Mailbox interface not ready\n");
+ goto out;
+ }
+
+ md_status = readq(cxlm->memdev_regs + CXLMDEV_STATUS_OFFSET);
+ if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) {
+ dev_err(dev, "mbox: reported doorbell ready, but not mbox ready\n");
+ goto out;
+ }
+
+ /*
+ * Hardware shouldn't allow a ready status but also have failure bits
+ * set. Spit out an error, this should be a bug report
+ */
+ rc = -EFAULT;
+ if (md_status & CXLMDEV_DEV_FATAL) {
+ dev_err(dev, "mbox: reported ready, but fatal\n");
+ goto out;
+ }
+ if (md_status & CXLMDEV_FW_HALT) {
+ dev_err(dev, "mbox: reported ready, but halted\n");
+ goto out;
+ }
+ if (CXLMDEV_RESET_NEEDED(md_status)) {
+ dev_err(dev, "mbox: reported ready, but reset needed\n");
+ goto out;
+ }
+
+ /* with lock held */
+ return 0;
+
+out:
+ mutex_unlock(&cxlm->mbox_mutex);
+ return rc;
+}
+
+/**
+ * cxl_mem_mbox_put() - Release exclusive access to the mailbox.
+ * @cxlm: The CXL memory device to communicate with.
+ *
+ * Context: Any context. Expects mbox_mutex to be held.
+ */
+static void cxl_mem_mbox_put(struct cxl_mem *cxlm)
+{
+ mutex_unlock(&cxlm->mbox_mutex);
+}
+
+/**
+ * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device.
+ * @cxlm: The CXL memory device to communicate with.
+ * @opcode: Opcode for the mailbox command.
+ * @in: The input payload for the mailbox command.
+ * @in_size: The length of the input payload
+ * @out: Caller allocated buffer for the output.
+ * @out_min_size: Minimum expected size of output.
+ *
+ * Context: Any context. Will acquire and release mbox_mutex.
+ * Return:
+ * * %>=0 - Number of bytes returned in @out.
+ * * %-E2BIG - Payload is too large for hardware.
+ * * %-EBUSY - Couldn't acquire exclusive mailbox access.
+ * * %-EFAULT - Hardware error occurred.
+ * * %-ENXIO - Command completed, but device reported an error.
+ * * %-ENODATA - Not enough payload data returned by hardware.
+ *
+ * Mailbox commands may execute successfully yet the device itself reported an
+ * error. While this distinction can be useful for commands from userspace, the
+ * kernel will only be able to use results when both are successful.
+ *
+ * See __cxl_mem_mbox_send_cmd()
+ */
+static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, u8 *in,
+ size_t in_size, u8 *out, size_t out_min_size)
+{
+ struct mbox_cmd mbox_cmd = {
+ .opcode = opcode,
+ .payload_in = in,
+ .size_in = in_size,
+ .payload_out = out,
+ };
+ int rc;
+
+ if (out_min_size > cxlm->payload_size)
+ return -E2BIG;
+
+ rc = cxl_mem_mbox_get(cxlm);
+ if (rc)
+ return rc;
+
+ rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd);
+ cxl_mem_mbox_put(cxlm);
+ if (rc)
+ return rc;
+
+ /* TODO: Map return code to proper kernel style errno */
+ if (mbox_cmd.return_code != CXL_MBOX_SUCCESS)
+ return -ENXIO;
+
+ if (mbox_cmd.size_out < out_min_size)
+ return -ENODATA;
+
+ return mbox_cmd.size_out;
+}
+
+/**
+ * cxl_mem_setup_regs() - Setup necessary MMIO.
+ * @cxlm: The CXL memory device to communicate with.
+ *
+ * Return: 0 if all necessary registers mapped.
+ *
+ * A memory device is required by spec to implement a certain set of MMIO
+ * regions. The purpose of this function is to enumerate and map those
+ * registers.
+ */
+static int cxl_mem_setup_regs(struct cxl_mem *cxlm)
+{
+ struct device *dev = &cxlm->pdev->dev;
+ int cap, cap_count;
+ u64 cap_array;
+
+ cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET);
+ if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) !=
+ CXLDEV_CAP_ARRAY_CAP_ID)
+ return -ENODEV;
+
+ cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array);
+
+ for (cap = 1; cap <= cap_count; cap++) {
+ void __iomem *register_block;
+ u32 offset;
+ u16 cap_id;
+
+ cap_id = readl(cxlm->regs + cap * 0x10) & 0xffff;
+ offset = readl(cxlm->regs + cap * 0x10 + 0x4);
+ register_block = cxlm->regs + offset;
+
+ switch (cap_id) {
+ case CXLDEV_CAP_CAP_ID_DEVICE_STATUS:
+ dev_dbg(dev, "found Status capability (0x%x)\n", offset);
+ cxlm->status_regs = register_block;
+ break;
+ case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX:
+ dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset);
+ cxlm->mbox_regs = register_block;
+ break;
+ case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX:
+ dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset);
+ break;
+ case CXLDEV_CAP_CAP_ID_MEMDEV:
+ dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset);
+ cxlm->memdev_regs = register_block;
+ break;
+ default:
+ dev_dbg(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, offset);
+ break;
+ }
+ }
+
+ if (!cxlm->status_regs || !cxlm->mbox_regs || !cxlm->memdev_regs) {
+ dev_err(dev, "registers not found: %s%s%s\n",
+ !cxlm->status_regs ? "status " : "",
+ !cxlm->mbox_regs ? "mbox " : "",
+ !cxlm->memdev_regs ? "memdev" : "");
+ return -ENXIO;
+ }
+
+ return 0;
+}
+
+static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm)
+{
+ const int cap = readl(cxlm->mbox_regs + CXLDEV_MBOX_CAPS_OFFSET);
+
+ cxlm->payload_size =
+ 1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
+
+ /*
+ * CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register
+ *
+ * If the size is too small, mandatory commands will not work and so
+ * there's no point in going forward. If the size is too large, there's
+ * no harm in soft limiting it.
+ */
+ cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M);
+ if (cxlm->payload_size < 256) {
+ dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)",
+ cxlm->payload_size);
+ return -ENXIO;
+ }
+
+ dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu",
+ cxlm->payload_size);
+
+ return 0;
+}
+
+static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo,
+ u32 reg_hi)
+{
+ struct device *dev = &pdev->dev;
+ struct cxl_mem *cxlm;
+ void __iomem *regs;
+ u64 offset;
+ u8 bar;
+ int rc;
+
+ cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL);
+ if (!cxlm) {
+ dev_err(dev, "No memory available\n");
+ return NULL;
+ }
+
+ offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo);
+ bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo);
+
+ /* Basic sanity check that BAR is big enough */
+ if (pci_resource_len(pdev, bar) < offset) {
+ dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar,
+ &pdev->resource[bar], (unsigned long long)offset);
+ return NULL;
+ }
+
+ rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev));
+ if (rc) {
+ dev_err(dev, "failed to map registers\n");
+ return NULL;
+ }
+ regs = pcim_iomap_table(pdev)[bar];
+
+ mutex_init(&cxlm->mbox_mutex);
+ cxlm->pdev = pdev;
+ cxlm->regs = regs + offset;
+
+ dev_dbg(dev, "Mapped CXL Memory Device resource\n");
+ return cxlm;
+}
static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
{
@@ -28,10 +478,65 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec)
return 0;
}
+/**
+ * cxl_mem_identify() - Send the IDENTIFY command to the device.
+ * @cxlm: The device to identify.
+ *
+ * Return: 0 if identify was executed successfully.
+ *
+ * This will dispatch the identify command to the device and on success populate
+ * structures to be exported to sysfs.
+ */
+static int cxl_mem_identify(struct cxl_mem *cxlm)
+{
+ struct cxl_mbox_identify {
+ char fw_revision[0x10];
+ __le64 total_capacity;
+ __le64 volatile_capacity;
+ __le64 persistent_capacity;
+ __le64 partition_align;
+ __le16 info_event_log_size;
+ __le16 warning_event_log_size;
+ __le16 failure_event_log_size;
+ __le16 fatal_event_log_size;
+ __le32 lsa_size;
+ u8 poison_list_max_mer[3];
+ __le16 inject_poison_limit;
+ u8 poison_caps;
+ u8 qos_telemetry_caps;
+ } __packed id;
+ int rc;
+
+ rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0,
+ (u8 *)&id, sizeof(id));
+ if (rc < 0)
+ return rc;
+
+ /*
+ * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias.
+ * For now, only the capacity is exported in sysfs
+ */
+ cxlm->ram_range.start = 0;
+ cxlm->ram_range.end = le64_to_cpu(id.volatile_capacity) - 1;
+
+ cxlm->pmem_range.start = 0;
+ cxlm->pmem_range.end = le64_to_cpu(id.persistent_capacity) - 1;
+
+ memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision));
+
+ return 0;
+}
+
static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct device *dev = &pdev->dev;
- int regloc;
+ struct cxl_mem *cxlm = NULL;
+ int rc, regloc, i;
+ u32 regloc_size;
+
+ rc = pcim_enable_device(pdev);
+ if (rc)
+ return rc;
regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET);
if (!regloc) {
@@ -39,7 +544,40 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return -ENXIO;
}
- return 0;
+ /* Get the size of the Register Locator DVSEC */
+ pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, &regloc_size);
+ regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size);
+
+ regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET;
+
+ for (i = regloc; i < regloc + regloc_size; i += 8) {
+ u32 reg_lo, reg_hi;
+ u8 reg_type;
+
+ /* "register low and high" contain other bits */
+ pci_read_config_dword(pdev, i, &reg_lo);
+ pci_read_config_dword(pdev, i + 4, &reg_hi);
+
+ reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo);
+
+ if (reg_type == CXL_REGLOC_RBI_MEMDEV) {
+ cxlm = cxl_mem_create(pdev, reg_lo, reg_hi);
+ break;
+ }
+ }
+
+ if (!cxlm)
+ return -ENODEV;
+
+ rc = cxl_mem_setup_regs(cxlm);
+ if (rc)
+ return rc;
+
+ rc = cxl_mem_setup_mailbox(cxlm);
+ if (rc)
+ return rc;
+
+ return cxl_mem_identify(cxlm);
}
static const struct pci_device_id cxl_mem_pci_tbl[] = {
diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h
index e464bea3f4d3..af3ec078cf6c 100644
--- a/drivers/cxl/pci.h
+++ b/drivers/cxl/pci.h
@@ -9,9 +9,23 @@
* See section 8.1 Configuration Space Registers in the CXL 2.0
* Specification
*/
+#define PCI_DVSEC_HEADER1_LENGTH_MASK GENMASK(31, 20)
#define PCI_DVSEC_VENDOR_ID_CXL 0x1E98
#define PCI_DVSEC_ID_CXL 0x0
#define PCI_DVSEC_ID_CXL_REGLOC_OFFSET 0x8
+#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET 0xC
+
+/* BAR Indicator Register (BIR) */
+#define CXL_REGLOC_BIR_MASK GENMASK(2, 0)
+
+/* Register Block Identifier (RBI) */
+#define CXL_REGLOC_RBI_MASK GENMASK(15, 8)
+#define CXL_REGLOC_RBI_EMPTY 0
+#define CXL_REGLOC_RBI_COMPONENT 1
+#define CXL_REGLOC_RBI_VIRT 2
+#define CXL_REGLOC_RBI_MEMDEV 3
+
+#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16)
#endif /* __CXL_PCI_H__ */
--
2.30.0
It's often useful in debug scenarios to see what the hardware has dumped
out. As it stands today, any device error will result in the payload not
being copied out, so there is no way to triage commands which weren't
expected to fail (and sometimes the payload may have that information).
The functionality is protected by normal kernel security mechanisms as
well as a CONFIG option in the CXL driver.
This was extracted from the original version of the CXL enabling patch
series.
Signed-off-by: Ben Widawsky <[email protected]>
---
drivers/cxl/Kconfig | 13 +++++++++++++
drivers/cxl/mem.c | 8 ++++++++
2 files changed, 21 insertions(+)
diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig
index 97dc4d751651..3eec9276e586 100644
--- a/drivers/cxl/Kconfig
+++ b/drivers/cxl/Kconfig
@@ -50,4 +50,17 @@ config CXL_MEM_RAW_COMMANDS
potential impact to memory currently in use by the kernel.
If developing CXL hardware or the driver say Y, otherwise say N.
+
+config CXL_MEM_INSECURE_DEBUG
+ bool "CXL.mem debugging"
+ depends on CXL_MEM
+ help
+ Enable debug of all CXL command payloads.
+
+ Some CXL devices and controllers support encryption and other
+ security features. The payloads for the commands that enable
+ those features may contain sensitive clear-text security
+ material. Disable debug of those command payloads by default.
+ If you are a kernel developer actively working on CXL
+ security enabling say Y, otherwise say N.
endif
diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
index 3bca8451348a..09c11935b824 100644
--- a/drivers/cxl/mem.c
+++ b/drivers/cxl/mem.c
@@ -341,6 +341,14 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm,
/* #5 */
rc = cxl_mem_wait_for_doorbell(cxlm);
+
+ if (!cxl_is_security_command(mbox_cmd->opcode) ||
+ IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) {
+ print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1,
+ mbox_cmd->payload_in, mbox_cmd->size_in,
+ true);
+ }
+
if (rc == -ETIMEDOUT) {
cxl_mem_mbox_timeout(cxlm, mbox_cmd);
return rc;
--
2.30.0