2023-03-06 16:44:31

by Fenghua Yu

Subject: [PATCH v2 00/16] Enable DSA 2.0 Event Log and completion record faulting features

Applications can send 64B descriptors to the DSA device via the CPU
instructions MOVDIR64B or ENQCMD. The application can choose to have
the device write back a completion record (CR) in system memory to
indicate the completion status of a submitted descriptor.

DSA hardware can do on-demand paging by faulting in user pages that
have no physical memory backing, with assistance from the IOMMU. The
spec designates this as the block-on-fault feature. While this
hardware feature simplifies operation, it also stalls the device
engines while the memory pages are faulted in through Page Request
Service (PRS). For applications sharing the same workqueue (wq), or
wqs in the same group, operations stall if there are no free engines.
To avoid slowing the performance of all other running applications
sharing the same device engine(s), PRS can be disabled and software
can deal with partial completion.

The block on fault feature can be disabled per wq on DSA 1.0. However,
that does not disable PRS for the whole path: PRS remains enabled for
CRs and for the descriptor list of a batch operation.

The other issue is the DSA 1.0 error reporting mechanism, the SWERROR
register. SWERROR can only report a single error at a time; follow-on
errors cannot be reported until the driver reads and acknowledges the
current error by writing a bit back to the register. If a large number
of faults arrive and software cannot clear them fast enough, the
overflowing errors are dropped by the device.
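
For reference, a minimal sketch of that DSA 1.0 acknowledge cycle,
using the IDXD_SWERR_* definitions already in the driver's
registers.h and simplified to the first 64-bit word of the register:

  u64 swerr = ioread64(idxd->reg_base + IDXD_SWERR_OFFSET);
  if (swerr & IDXD_SWERR_VALID) {
          if (swerr & IDXD_SWERR_OVERFLOW)
                  dev_warn(dev, "follow-on errors were dropped\n");
          /* ... decode and report the single latched error ... */
          /* writing the bits back releases the register for the next error */
          iowrite64(swerr & IDXD_SWERR_ACK, idxd->reg_base + IDXD_SWERR_OFFSET);
  }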

A CR is an optional 32-byte (DSA) or 64-byte (IAA) status that is
written back for a submitted descriptor. If the address for the CR
faults, the error is reported to the SWERROR register instead.

With DSA 2.0 hardware [1], the event log feature is added. All errors
are reported as entries in a circular buffer residing in system
memory. The system admin is responsible for configuring, per device, a
circular buffer large enough to handle the potential errors that may
be reported. If the buffer is full and another error needs to be
reported, the device engine blocks until a slot frees up in the
buffer. An event log entry for a faulted CR contains the error
information, the CR address that faulted, and the CR contents the
device had originally intended to write.

DSA 2.0 also introduces a per-wq PRS disable knob, which disables all
PRS operations for the specific wq. The device still has Address
Translation Service (ATS) enabled. When ATS fails on a memory address
for a CR, the hardware writes an event log entry into the event log
ring buffer. The driver software is expected to parse the entry, fault
in the CR address, and write the CR contents to that address.
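
Condensed, the handling path added later in this series (patches 8
and 9) looks roughly like the sketch below; the names follow the
driver code:

  /* event log entry reports a faulted CR (e.g. status DSA_COMP_CRA_XLAT) */
  void *cr = (void *)entry + idxd->data->evl_cr_off;  /* CR image in the entry */
  int copied = iommu_access_remote_vm(entry->pasid, entry->fault_addr,
                                      cr, idxd->data->compl_size, FOLL_WRITE);
  if (copied != idxd->data->compl_size)
          dev_err(dev, "failed to write completion record\n");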

This patch series implements the DSA 2.0 event log support and adds
handling of faulted user CRs. The driver adds the same support for
batch operation descriptors. With a batch operation, handling of the
event log entry is a bit more complex: the faulting CR could belong to
the batch descriptor or to any of the operation descriptors within the
batch. The hardware generates a batch identifier that driver software
uses to correlate the event log entries for the relevant descriptors
of that batch.

Faults on the source and destination addresses of an operation are not
handled by the driver. That is left to the user application, which can
fault in the memory and re-submit the remaining operation.

This series consists of three parts:
1. Patch 1: Make the misc interrupt one shot. The event log interrupt
depends on this patch. It was posted previously but is not upstream yet:
https://lore.kernel.org/dmaengine/165125374675.311834.10460196228320964350.stgit@djiang5-desk3.ch.intel.com/
2. Patches 2-15: Enable Event Log and Completion Record faulting.
3. Patch 16: Configure PRS disable per WQ.

This series applies cleanly on top of the "Expose IAA 2.0 device
capabilities" series:
https://lore.kernel.org/lkml/[email protected]/

Change log:
v2:
- Define and export iommu_access_remote_vm() for IDXD driver to write
completion record to user address space. This change removes
patch 8 and 9 in v1 (Alistair Popple)

Dave Jiang (15):
dmaengine: idxd: make misc interrupt one shot
dmaengine: idxd: add event log size sysfs attribute
dmaengine: idxd: setup event log configuration
dmaengine: idxd: add interrupt handling for event log
dmaengine: idxd: add debugfs for event log dump
dmaengine: idxd: add per DSA wq workqueue for processing cr faults
dmaengine: idxd: create kmem cache for event log fault items
dmaengine: idxd: process user page faults for completion record
dmaengine: idxd: add descs_completed field for completion record
dmaengine: idxd: process batch descriptor completion record faults
dmaengine: idxd: add per file user counters for completion record
faults
dmaengine: idxd: add a device to represent the file opened
dmaengine: idxd: expose fault counters to sysfs
dmaengine: idxd: add pid to exported sysfs attribute for opened file
dmaengine: idxd: add per wq PRS disable

Fenghua Yu (1):
iommu: define and export iommu_access_remote_vm()

.../ABI/stable/sysfs-driver-dma-idxd | 43 +++
drivers/dma/Kconfig | 1 +
drivers/dma/idxd/Makefile | 2 +-
drivers/dma/idxd/cdev.c | 264 ++++++++++++++++--
drivers/dma/idxd/debugfs.c | 138 +++++++++
drivers/dma/idxd/device.c | 113 +++++++-
drivers/dma/idxd/idxd.h | 63 +++++
drivers/dma/idxd/init.c | 53 ++++
drivers/dma/idxd/irq.c | 203 ++++++++++++--
drivers/dma/idxd/registers.h | 105 ++++++-
drivers/dma/idxd/sysfs.c | 112 +++++++-
drivers/iommu/iommu-sva.c | 35 +++
include/linux/iommu.h | 9 +
include/uapi/linux/idxd.h | 15 +-
14 files changed, 1091 insertions(+), 65 deletions(-)
create mode 100644 drivers/dma/idxd/debugfs.c

--
2.37.1



2023-03-06 16:44:34

by Fenghua Yu

Subject: [PATCH v2 03/16] dmaengine: idxd: setup event log configuration

From: Dave Jiang <[email protected]>

Add setup of the event log feature for supported devices. The event
log addresses an error reporting deficiency in gen 1 DSA devices,
where a second error event is not reported while a first event is
pending software handling. The event log is a circular buffer that the
device pushes error events to. It is up to the user to create an event
log ring large enough to capture the expected events. The evl size can
be set via a device sysfs attribute. By default, the 64-entry minimum
is configured when the event log is enabled.
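
For illustration, the sizing follows from the helpers added below; the
evl_support values are examples implied by the entry layouts in this
patch:

  /* entry size is encoded in GENCAP as 32 * 2^evl_support bytes */
  evl_ent_size(idxd);   /* e.g. DSA (evl_support = 1): 64 bytes,
                           IAA (evl_support = 2): 128 bytes */
  /* total ring bytes = number of entries * entry size, so the default
     64-entry minimum on a 64-byte-entry DSA device is 4096 bytes */
  evl_size(idxd);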

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/device.c | 89 +++++++++++++++++++++++++++++++++++-
drivers/dma/idxd/idxd.h | 19 ++++++++
drivers/dma/idxd/init.c | 1 +
drivers/dma/idxd/registers.h | 72 ++++++++++++++++++++++++++++-
drivers/dma/idxd/sysfs.c | 3 +-
include/uapi/linux/idxd.h | 1 +
6 files changed, 181 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index 5f321f3b4242..230fe9bb56ae 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -752,6 +752,83 @@ void idxd_device_clear_state(struct idxd_device *idxd)
spin_unlock(&idxd->dev_lock);
}

+static int idxd_device_evl_setup(struct idxd_device *idxd)
+{
+ union gencfg_reg gencfg;
+ union evlcfg_reg evlcfg;
+ union genctrl_reg genctrl;
+ struct device *dev = &idxd->pdev->dev;
+ void *addr;
+ dma_addr_t dma_addr;
+ int size;
+ struct idxd_evl *evl = idxd->evl;
+
+ if (!evl)
+ return 0;
+
+ size = evl_size(idxd);
+ /*
+ * Address needs to be page aligned. However, dma_alloc_coherent() provides
+ * at minimal page size aligned address. No manual alignment required.
+ */
+ addr = dma_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL);
+ if (!addr)
+ return -ENOMEM;
+
+ memset(addr, 0, size);
+
+ spin_lock(&evl->lock);
+ evl->log = addr;
+ evl->dma = dma_addr;
+ evl->log_size = size;
+
+ memset(&evlcfg, 0, sizeof(evlcfg));
+ evlcfg.bits[0] = dma_addr & GENMASK(63, 12);
+ evlcfg.size = evl->size;
+
+ iowrite64(evlcfg.bits[0], idxd->reg_base + IDXD_EVLCFG_OFFSET);
+ iowrite64(evlcfg.bits[1], idxd->reg_base + IDXD_EVLCFG_OFFSET + 8);
+
+ genctrl.bits = ioread32(idxd->reg_base + IDXD_GENCTRL_OFFSET);
+ genctrl.evl_int_en = 1;
+ iowrite32(genctrl.bits, idxd->reg_base + IDXD_GENCTRL_OFFSET);
+
+ gencfg.bits = ioread32(idxd->reg_base + IDXD_GENCFG_OFFSET);
+ gencfg.evl_en = 1;
+ iowrite32(gencfg.bits, idxd->reg_base + IDXD_GENCFG_OFFSET);
+
+ spin_unlock(&evl->lock);
+ return 0;
+}
+
+static void idxd_device_evl_free(struct idxd_device *idxd)
+{
+ union gencfg_reg gencfg;
+ union genctrl_reg genctrl;
+ struct device *dev = &idxd->pdev->dev;
+ struct idxd_evl *evl = idxd->evl;
+
+ gencfg.bits = ioread32(idxd->reg_base + IDXD_GENCFG_OFFSET);
+ if (!gencfg.evl_en)
+ return;
+
+ spin_lock(&evl->lock);
+ gencfg.evl_en = 0;
+ iowrite32(gencfg.bits, idxd->reg_base + IDXD_GENCFG_OFFSET);
+
+ genctrl.bits = ioread32(idxd->reg_base + IDXD_GENCTRL_OFFSET);
+ genctrl.evl_int_en = 0;
+ iowrite32(genctrl.bits, idxd->reg_base + IDXD_GENCTRL_OFFSET);
+
+ iowrite64(0, idxd->reg_base + IDXD_EVLCFG_OFFSET);
+ iowrite64(0, idxd->reg_base + IDXD_EVLCFG_OFFSET + 8);
+
+ dma_free_coherent(dev, evl->log_size, evl->log, evl->dma);
+ evl->log = NULL;
+ evl->size = IDXD_EVL_SIZE_MIN;
+ spin_unlock(&evl->lock);
+}
+
static void idxd_group_config_write(struct idxd_group *group)
{
struct idxd_device *idxd = group->idxd;
@@ -1451,15 +1528,24 @@ int idxd_device_drv_probe(struct idxd_dev *idxd_dev)
if (rc < 0)
return -ENXIO;

+ rc = idxd_device_evl_setup(idxd);
+ if (rc < 0) {
+ idxd->cmd_status = IDXD_SCMD_DEV_EVL_ERR;
+ return rc;
+ }
+
/* Start device */
rc = idxd_device_enable(idxd);
- if (rc < 0)
+ if (rc < 0) {
+ idxd_device_evl_free(idxd);
return rc;
+ }

/* Setup DMA device without channels */
rc = idxd_register_dma_device(idxd);
if (rc < 0) {
idxd_device_disable(idxd);
+ idxd_device_evl_free(idxd);
idxd->cmd_status = IDXD_SCMD_DEV_DMA_ERR;
return rc;
}
@@ -1488,6 +1574,7 @@ void idxd_device_drv_remove(struct idxd_dev *idxd_dev)
idxd_device_disable(idxd);
if (test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
idxd_device_reset(idxd);
+ idxd_device_evl_free(idxd);
}

static enum idxd_dev_type dev_types[] = {
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index 2a71273f1822..c74681f02b18 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -262,7 +262,15 @@ struct idxd_driver_data {
};

struct idxd_evl {
+ /* Lock to protect event log access. */
+ spinlock_t lock;
+ void *log;
+ dma_addr_t dma;
+ /* Total size of event log = number of entries * entry size. */
+ unsigned int log_size;
+ /* The number of entries in the event log. */
u16 size;
+ u16 head;
};

struct idxd_device {
@@ -324,6 +332,17 @@ struct idxd_device {
struct idxd_evl *evl;
};

+static inline unsigned int evl_ent_size(struct idxd_device *idxd)
+{
+ return idxd->hw.gen_cap.evl_support ?
+ (32 * (1 << idxd->hw.gen_cap.evl_support)) : 0;
+}
+
+static inline unsigned int evl_size(struct idxd_device *idxd)
+{
+ return idxd->evl->size * evl_ent_size(idxd);
+}
+
/* IDXD software descriptor */
struct idxd_desc {
union {
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index d1fb01c115d8..2ffeb2f3a2c8 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -344,6 +344,7 @@ static int idxd_init_evl(struct idxd_device *idxd)
if (!evl)
return -ENOMEM;

+ spin_lock_init(&evl->lock);
evl->size = IDXD_EVL_SIZE_MIN;
idxd->evl = evl;
return 0;
diff --git a/drivers/dma/idxd/registers.h b/drivers/dma/idxd/registers.h
index ea3a499a3c3c..11bb97cf7481 100644
--- a/drivers/dma/idxd/registers.h
+++ b/drivers/dma/idxd/registers.h
@@ -3,6 +3,8 @@
#ifndef _IDXD_REGISTERS_H_
#define _IDXD_REGISTERS_H_

+#include <uapi/linux/idxd.h>
+
/* PCI Config */
#define PCI_DEVICE_ID_INTEL_DSA_SPR0 0x0b25
#define PCI_DEVICE_ID_INTEL_IAX_SPR0 0x0cfe
@@ -119,7 +121,8 @@ union gencfg_reg {
u32 rdbuf_limit:8;
u32 rsvd:4;
u32 user_int_en:1;
- u32 rsvd2:19;
+ u32 evl_en:1;
+ u32 rsvd2:18;
};
u32 bits;
} __packed;
@@ -129,7 +132,8 @@ union genctrl_reg {
struct {
u32 softerr_int_en:1;
u32 halt_int_en:1;
- u32 rsvd:30;
+ u32 evl_int_en:1;
+ u32 rsvd:29;
};
u32 bits;
} __packed;
@@ -299,6 +303,21 @@ union iaa_cap_reg {

#define IDXD_IAACAP_OFFSET 0x180

+#define IDXD_EVLCFG_OFFSET 0xe0
+union evlcfg_reg {
+ struct {
+ u64 pasid_en:1;
+ u64 priv:1;
+ u64 rsvd:10;
+ u64 base_addr:52;
+
+ u64 size:16;
+ u64 pasid:20;
+ u64 rsvd2:28;
+ };
+ u64 bits[2];
+} __packed;
+
#define IDXD_EVL_SIZE_MIN 0x0040
#define IDXD_EVL_SIZE_MAX 0xffff

@@ -539,4 +558,53 @@ union filter_cfg {
u64 val;
} __packed;

+struct __evl_entry {
+ u64 rsvd:2;
+ u64 desc_valid:1;
+ u64 wq_idx_valid:1;
+ u64 batch:1;
+ u64 fault_rw:1;
+ u64 priv:1;
+ u64 err_info_valid:1;
+ u64 error:8;
+ u64 wq_idx:8;
+ u64 batch_id:8;
+ u64 operation:8;
+ u64 pasid:20;
+ u64 rsvd2:4;
+
+ u16 batch_idx;
+ u16 rsvd3;
+ union {
+ /* Invalid Flags 0x11 */
+ u32 invalid_flags;
+ /* Invalid Int Handle 0x19 */
+ /* Page fault 0x1a */
+ /* Page fault 0x06, 0x1f, only operand_id */
+ /* Page fault before drain or in batch, 0x26, 0x27 */
+ struct {
+ u16 int_handle;
+ u16 rci:1;
+ u16 ims:1;
+ u16 rcr:1;
+ u16 first_err_in_batch:1;
+ u16 rsvd4_2:9;
+ u16 operand_id:3;
+ };
+ };
+ u64 fault_addr;
+ u64 rsvd5;
+} __packed;
+
+struct dsa_evl_entry {
+ struct __evl_entry e;
+ struct dsa_completion_record cr;
+} __packed;
+
+struct iax_evl_entry {
+ struct __evl_entry e;
+ u64 rsvd[4];
+ struct iax_completion_record cr;
+} __packed;
+
#endif
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
index 85644e5bde83..163fdfaa5022 100644
--- a/drivers/dma/idxd/sysfs.c
+++ b/drivers/dma/idxd/sysfs.c
@@ -1605,7 +1605,8 @@ static ssize_t event_log_size_store(struct device *dev,
if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
return -EPERM;

- if (val < IDXD_EVL_SIZE_MIN || val > IDXD_EVL_SIZE_MAX)
+ if (val < IDXD_EVL_SIZE_MIN || val > IDXD_EVL_SIZE_MAX ||
+ (val * evl_ent_size(idxd) > ULONG_MAX - idxd->evl->dma))
return -EINVAL;

idxd->evl->size = val;
diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
index fc47635b57dc..5d05bf12f2bd 100644
--- a/include/uapi/linux/idxd.h
+++ b/include/uapi/linux/idxd.h
@@ -30,6 +30,7 @@ enum idxd_scmd_stat {
IDXD_SCMD_WQ_NO_PRIV = 0x800f0000,
IDXD_SCMD_WQ_IRQ_ERR = 0x80100000,
IDXD_SCMD_WQ_USER_NO_IOMMU = 0x80110000,
+ IDXD_SCMD_DEV_EVL_ERR = 0x80120000,
};

#define IDXD_SCMD_SOFTERR_MASK 0x80000000
--
2.37.1


2023-03-06 16:45:53

by Fenghua Yu

Subject: [PATCH v2 05/16] dmaengine: idxd: add debugfs for event log dump

From: Dave Jiang <[email protected]>

Add a debugfs entry to dump the content of the event log for
debugging. The function dumps all non-zero entries in the event log.
It notes which entries have been processed and which are still pending
processing at the time of the dump. The entries may not always be in
chronological order because the log is a circular buffer.
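
For example, assuming debugfs is mounted at /sys/kernel/debug and the
device is named dsa0, the log can be dumped with:

  # cat /sys/kernel/debug/idxd/dsa0/event_log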

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/Makefile | 2 +-
drivers/dma/idxd/debugfs.c | 138 +++++++++++++++++++++++++++++++++++++
drivers/dma/idxd/idxd.h | 9 +++
drivers/dma/idxd/init.c | 12 ++++
include/uapi/linux/idxd.h | 6 +-
5 files changed, 164 insertions(+), 3 deletions(-)
create mode 100644 drivers/dma/idxd/debugfs.c

diff --git a/drivers/dma/idxd/Makefile b/drivers/dma/idxd/Makefile
index a1e9f2b3a37c..dc096839ac63 100644
--- a/drivers/dma/idxd/Makefile
+++ b/drivers/dma/idxd/Makefile
@@ -1,7 +1,7 @@
ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=IDXD

obj-$(CONFIG_INTEL_IDXD) += idxd.o
-idxd-y := init.o irq.o device.o sysfs.o submit.o dma.o cdev.o
+idxd-y := init.o irq.o device.o sysfs.o submit.o dma.o cdev.o debugfs.o

idxd-$(CONFIG_INTEL_IDXD_PERFMON) += perfmon.o

diff --git a/drivers/dma/idxd/debugfs.c b/drivers/dma/idxd/debugfs.c
new file mode 100644
index 000000000000..9cfbd9b14c4c
--- /dev/null
+++ b/drivers/dma/idxd/debugfs.c
@@ -0,0 +1,138 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Intel Corporation. All rights rsvd. */
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/debugfs.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <uapi/linux/idxd.h>
+#include "idxd.h"
+#include "registers.h"
+
+static struct dentry *idxd_debugfs_dir;
+
+static void dump_event_entry(struct idxd_device *idxd, struct seq_file *s,
+ u16 index, int *count, bool processed)
+{
+ struct idxd_evl *evl = idxd->evl;
+ struct dsa_evl_entry *entry;
+ struct dsa_completion_record *cr;
+ u64 *raw;
+ int i;
+ int evl_strides = evl_ent_size(idxd) / sizeof(u64);
+
+ entry = (struct dsa_evl_entry *)evl->log + index;
+
+ if (!entry->e.desc_valid)
+ return;
+
+ seq_printf(s, "Event Log entry %d (real index %u) processed: %u\n",
+ *count, index, processed);
+
+ seq_printf(s, "desc valid %u wq idx valid %u\n"
+ "batch %u fault rw %u priv %u error 0x%x\n"
+ "wq idx %u op %#x pasid %u batch idx %u\n"
+ "fault addr %#llx\n",
+ entry->e.desc_valid, entry->e.wq_idx_valid,
+ entry->e.batch, entry->e.fault_rw, entry->e.priv,
+ entry->e.error, entry->e.wq_idx, entry->e.operation,
+ entry->e.pasid, entry->e.batch_idx, entry->e.fault_addr);
+
+ cr = &entry->cr;
+ seq_printf(s, "status %#x result %#x fault_info %#x bytes_completed %u\n"
+ "fault addr %#llx inv flags %#x\n\n",
+ cr->status, cr->result, cr->fault_info, cr->bytes_completed,
+ cr->fault_addr, cr->invalid_flags);
+
+ raw = (u64 *)entry;
+
+ for (i = 0; i < evl_strides; i++)
+ seq_printf(s, "entry[%d] = %#llx\n", i, raw[i]);
+
+ seq_puts(s, "\n");
+ *count += 1;
+}
+
+static int debugfs_evl_show(struct seq_file *s, void *d)
+{
+ struct idxd_device *idxd = s->private;
+ struct idxd_evl *evl = idxd->evl;
+ union evl_status_reg evl_status;
+ u16 h, t, evl_size, i;
+ int count = 0;
+ bool processed = true;
+
+ if (!evl || !evl->log)
+ return 0;
+
+ spin_lock(&evl->lock);
+
+ h = evl->head;
+ evl_status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
+ t = evl_status.tail;
+ evl_size = evl->size;
+
+ seq_printf(s, "Event Log head %u tail %u interrupt pending %u\n\n",
+ evl_status.head, evl_status.tail, evl_status.int_pending);
+
+ i = t;
+ while (1) {
+ i = (i + 1) % evl_size;
+ if (i == t)
+ break;
+
+ if (processed && i == h)
+ processed = false;
+ dump_event_entry(idxd, s, i, &count, processed);
+ }
+
+ spin_unlock(&evl->lock);
+ return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(debugfs_evl);
+
+int idxd_device_init_debugfs(struct idxd_device *idxd)
+{
+ if (IS_ERR_OR_NULL(idxd_debugfs_dir))
+ return 0;
+
+ idxd->dbgfs_dir = debugfs_create_dir(dev_name(idxd_confdev(idxd)), idxd_debugfs_dir);
+ if (IS_ERR(idxd->dbgfs_dir))
+ return PTR_ERR(idxd->dbgfs_dir);
+
+ if (idxd->evl) {
+ idxd->dbgfs_evl_file = debugfs_create_file("event_log", 0400,
+ idxd->dbgfs_dir, idxd,
+ &debugfs_evl_fops);
+ if (IS_ERR(idxd->dbgfs_evl_file)) {
+ debugfs_remove_recursive(idxd->dbgfs_dir);
+ idxd->dbgfs_dir = NULL;
+ return PTR_ERR(idxd->dbgfs_evl_file);
+ }
+ }
+
+ return 0;
+}
+
+void idxd_device_remove_debugfs(struct idxd_device *idxd)
+{
+ debugfs_remove_recursive(idxd->dbgfs_dir);
+}
+
+int idxd_init_debugfs(void)
+{
+ if (!debugfs_initialized())
+ return 0;
+
+ idxd_debugfs_dir = debugfs_create_dir(KBUILD_MODNAME, NULL);
+ if (IS_ERR(idxd_debugfs_dir))
+ return PTR_ERR(idxd_debugfs_dir);
+ return 0;
+}
+
+void idxd_remove_debugfs(void)
+{
+ debugfs_remove_recursive(idxd_debugfs_dir);
+}
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index c74681f02b18..b923b90b7299 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -330,6 +330,9 @@ struct idxd_device {

unsigned long *opcap_bmap;
struct idxd_evl *evl;
+
+ struct dentry *dbgfs_dir;
+ struct dentry *dbgfs_evl_file;
};

static inline unsigned int evl_ent_size(struct idxd_device *idxd)
@@ -704,4 +707,10 @@ static inline void perfmon_init(void) {}
static inline void perfmon_exit(void) {}
#endif

+/* debugfs */
+int idxd_device_init_debugfs(struct idxd_device *idxd);
+void idxd_device_remove_debugfs(struct idxd_device *idxd);
+int idxd_init_debugfs(void);
+void idxd_remove_debugfs(void);
+
#endif
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index 2ffeb2f3a2c8..d19bc6389221 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -670,6 +670,10 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto err_dev_register;
}

+ rc = idxd_device_init_debugfs(idxd);
+ if (rc)
+ dev_warn(dev, "IDXD debugfs failed to setup\n");
+
dev_info(&pdev->dev, "Intel(R) Accelerator Device (v%x)\n",
idxd->hw.version);

@@ -732,6 +736,7 @@ static void idxd_remove(struct pci_dev *pdev)
idxd_shutdown(pdev);
if (device_pasid_enabled(idxd))
idxd_disable_system_pasid(idxd);
+ idxd_device_remove_debugfs(idxd);

irq_entry = idxd_get_ie(idxd, 0);
free_irq(irq_entry->vector, irq_entry);
@@ -789,6 +794,10 @@ static int __init idxd_init_module(void)
if (err)
goto err_cdev_register;

+ err = idxd_init_debugfs();
+ if (err)
+ goto err_debugfs;
+
err = pci_register_driver(&idxd_pci_driver);
if (err)
goto err_pci_register;
@@ -796,6 +805,8 @@ static int __init idxd_init_module(void)
return 0;

err_pci_register:
+ idxd_remove_debugfs();
+err_debugfs:
idxd_cdev_remove();
err_cdev_register:
idxd_driver_unregister(&idxd_user_drv);
@@ -816,5 +827,6 @@ static void __exit idxd_exit_module(void)
pci_unregister_driver(&idxd_pci_driver);
idxd_cdev_remove();
perfmon_exit();
+ idxd_remove_debugfs();
}
module_exit(idxd_exit_module);
diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
index 0bc8eea18586..e86199d09a91 100644
--- a/include/uapi/linux/idxd.h
+++ b/include/uapi/linux/idxd.h
@@ -311,7 +311,8 @@ struct dsa_completion_record {
uint8_t result;
uint8_t dif_status;
};
- uint16_t rsvd;
+ uint8_t fault_info;
+ uint8_t rsvd;
uint32_t bytes_completed;
uint64_t fault_addr;
union {
@@ -368,7 +369,8 @@ struct dsa_raw_completion_record {
struct iax_completion_record {
volatile uint8_t status;
uint8_t error_code;
- uint16_t rsvd;
+ uint8_t fault_info;
+ uint8_t rsvd;
uint32_t bytes_completed;
uint64_t fault_addr;
uint32_t invalid_flags;
--
2.37.1


2023-03-06 16:46:01

by Fenghua Yu

Subject: [PATCH v2 07/16] dmaengine: idxd: create kmem cache for event log fault items

From: Dave Jiang <[email protected]>

Add a kmem cache per device for allocating event log fault contexts.
The context allows an event log entry to be copied and passed to a
software workqueue to be processed. Because the event log entry size
can differ between device types, a single global kmem cache is not
possible.
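
For example, assuming 64-byte DSA and 128-byte IAA event log entries,
the per-device caches would serve objects of
sizeof(struct idxd_evl_fault) + 64 and
sizeof(struct idxd_evl_fault) + 128 bytes respectively; a single
global cache could only provide one object size.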

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/idxd.h | 10 ++++++++++
drivers/dma/idxd/init.c | 9 +++++++++
drivers/dma/idxd/sysfs.c | 1 +
3 files changed, 20 insertions(+)

diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index 6e56361ae658..c5d99c179902 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -274,6 +274,15 @@ struct idxd_evl {
u16 head;
};

+struct idxd_evl_fault {
+ struct work_struct work;
+ struct idxd_wq *wq;
+ u8 status;
+
+ /* make this last member always */
+ struct __evl_entry entry[];
+};
+
struct idxd_device {
struct idxd_dev idxd_dev;
struct idxd_driver_data *data;
@@ -331,6 +340,7 @@ struct idxd_device {

unsigned long *opcap_bmap;
struct idxd_evl *evl;
+ struct kmem_cache *evl_cache;

struct dentry *dbgfs_dir;
struct dentry *dbgfs_evl_file;
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index d19bc6389221..a7c98fac7a85 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -346,6 +346,15 @@ static int idxd_init_evl(struct idxd_device *idxd)

spin_lock_init(&evl->lock);
evl->size = IDXD_EVL_SIZE_MIN;
+
+ idxd->evl_cache = kmem_cache_create(dev_name(idxd_confdev(idxd)),
+ sizeof(struct idxd_evl_fault) + evl_ent_size(idxd),
+ 0, 0, NULL);
+ if (!idxd->evl_cache) {
+ kfree(evl);
+ return -ENOMEM;
+ }
+
idxd->evl = evl;
return 0;
}
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
index 163fdfaa5022..8b9dfa0d2b99 100644
--- a/drivers/dma/idxd/sysfs.c
+++ b/drivers/dma/idxd/sysfs.c
@@ -1718,6 +1718,7 @@ static void idxd_conf_device_release(struct device *dev)
kfree(idxd->wqs);
kfree(idxd->engines);
kfree(idxd->evl);
+ kmem_cache_destroy(idxd->evl_cache);
ida_free(&idxd_ida, idxd->id);
bitmap_free(idxd->opcap_bmap);
kfree(idxd);
--
2.37.1


2023-03-06 16:46:07

by Fenghua Yu

Subject: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

Define and export iommu_access_remote_vm() to allow IOMMU-related
drivers to access a user address space by PASID.

The IDXD driver would like to use it to write a user's completion
record that the hardware device was unable to write due to a user
page fault.

Without this API, it is complex for the IDXD driver to copy the
completion record to a process' faulting address, for two reasons:
1. access_remote_vm() is not exported and should not be exported for
drivers because drivers may easily cause mm reference issues.
2. the user can free the pages at the faulting address, which is what
triggers the fault from the IDXD device in the first place.

The driver would have to call iommu_sva_find(), kthread_use_mm(), and
re-implement the majority of access_remote_vm() etc. to access the
remote vm.

This IOMMU-specific API hides these details and provides a clean
interface for the idxd driver and potentially other IOMMU-related
drivers.
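
As an illustration, a sketch of the intended IDXD usage (see the event
log patches in this series; cr_buf and cr_size are placeholders):

  /* write back a completion record the device could not write itself */
  int copied = iommu_access_remote_vm(pasid, cr_user_addr, cr_buf,
                                      cr_size, FOLL_WRITE);
  if (copied != cr_size)
          /* PASID no longer bound, or the user range is gone/invalid */
          dev_err(dev, "partial CR writeback (%d of %d bytes)\n",
                  copied, cr_size);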

Suggested-by: Alistair Popple <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Alistair Popple <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: [email protected]
---
v2:
- Define and export iommu_access_remote_vm() for IDXD driver to write
completion record to user address space. This change removes
patch 8 and 9 in v1 (Alistair Popple)

drivers/iommu/iommu-sva.c | 35 +++++++++++++++++++++++++++++++++++
include/linux/iommu.h | 9 +++++++++
2 files changed, 44 insertions(+)

diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
index 24bf9b2b58aa..1d7a0aee58f7 100644
--- a/drivers/iommu/iommu-sva.c
+++ b/drivers/iommu/iommu-sva.c
@@ -71,6 +71,41 @@ struct mm_struct *iommu_sva_find(ioasid_t pasid)
}
EXPORT_SYMBOL_GPL(iommu_sva_find);

+/**
+ * iommu_access_remote_vm - access another process' address space by PASID
+ * @pasid: Process Address Space ID assigned to the mm
+ * @addr: start address to access
+ * @buf: source or destination buffer
+ * @len: number of bytes to transfer
+ * @gup_flags: flags modifying lookup behaviour
+ *
+ * Another process' address space is found by PASID. A reference on @mm
+ * is taken and released inside the function.
+ *
+ * Return: number of bytes copied from source to destination.
+ */
+int iommu_access_remote_vm(ioasid_t pasid, unsigned long addr, void *buf,
+ int len, unsigned int gup_flags)
+{
+ struct mm_struct *mm;
+ int copied;
+
+ mm = iommu_sva_find(pasid);
+ if (IS_ERR_OR_NULL(mm))
+ return 0;
+
+ /*
+ * A reference on @mm has been held by mmget_not_zero()
+ * during iommu_sva_find().
+ */
+ copied = access_remote_vm(mm, addr, buf, len, gup_flags);
+ /* The reference is released. */
+ mmput(mm);
+
+ return copied;
+}
+EXPORT_SYMBOL_GPL(iommu_access_remote_vm);
+
/**
* iommu_sva_bind_device() - Bind a process address space to a device
* @dev: the device
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 6595454d4f48..414a46a53799 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -1177,6 +1177,8 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev,
struct mm_struct *mm);
void iommu_sva_unbind_device(struct iommu_sva *handle);
u32 iommu_sva_get_pasid(struct iommu_sva *handle);
+int iommu_access_remote_vm(ioasid_t pasid, unsigned long addr, void *buf,
+ int len, unsigned int gup_flags);
#else
static inline struct iommu_sva *
iommu_sva_bind_device(struct device *dev, struct mm_struct *mm)
@@ -1192,6 +1194,13 @@ static inline u32 iommu_sva_get_pasid(struct iommu_sva *handle)
{
return IOMMU_PASID_INVALID;
}
+
+static inline int iommu_access_remote_vm(ioasid_t pasid, unsigned long addr,
+ void *buf, int len,
+ unsigned int gup_flags)
+{
+ return 0;
+}
#endif /* CONFIG_IOMMU_SVA */

#endif /* __LINUX_IOMMU_H */
--
2.37.1


2023-03-06 16:46:11

by Fenghua Yu

Subject: [PATCH v2 06/16] dmaengine: idxd: add per DSA wq workqueue for processing cr faults

From: Dave Jiang <[email protected]>

Add a workqueue for user-submitted completion record fault processing.
The workqueue's creation and destruction are tied to the user
sub-driver lifetime, since it is only used when the wq is a user type.
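
Condensed, the intended flow (the fault queueing itself lands in a
later patch of this series):

  /* user sub-driver probe */
  wq->wq = create_workqueue(dev_name(wq_confdev(wq)));
  /* misc interrupt thread, on a CR fault event */
  queue_work(wq->wq, &fault->work);
  /* user sub-driver remove */
  destroy_workqueue(wq->wq);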

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/cdev.c | 11 +++++++++++
drivers/dma/idxd/idxd.h | 1 +
2 files changed, 12 insertions(+)

diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index 674bfefca088..cbe29e1a6a44 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -330,6 +330,13 @@ static int idxd_user_drv_probe(struct idxd_dev *idxd_dev)
}

mutex_lock(&wq->wq_lock);
+
+ wq->wq = create_workqueue(dev_name(wq_confdev(wq)));
+ if (!wq->wq) {
+ rc = -ENOMEM;
+ goto wq_err;
+ }
+
wq->type = IDXD_WQT_USER;
rc = drv_enable_wq(wq);
if (rc < 0)
@@ -348,7 +355,9 @@ static int idxd_user_drv_probe(struct idxd_dev *idxd_dev)
err_cdev:
drv_disable_wq(wq);
err:
+ destroy_workqueue(wq->wq);
wq->type = IDXD_WQT_NONE;
+wq_err:
mutex_unlock(&wq->wq_lock);
return rc;
}
@@ -361,6 +370,8 @@ static void idxd_user_drv_remove(struct idxd_dev *idxd_dev)
idxd_wq_del_cdev(wq);
drv_disable_wq(wq);
wq->type = IDXD_WQT_NONE;
+ destroy_workqueue(wq->wq);
+ wq->wq = NULL;
mutex_unlock(&wq->wq_lock);
}

diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index b923b90b7299..6e56361ae658 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -185,6 +185,7 @@ struct idxd_wq {
struct idxd_dev idxd_dev;
struct idxd_cdev *idxd_cdev;
struct wait_queue_head err_queue;
+ struct workqueue_struct *wq;
struct idxd_device *idxd;
int id;
struct idxd_irq_entry ie;
--
2.37.1


2023-03-06 16:46:14

by Fenghua Yu

Subject: [PATCH v2 10/16] dmaengine: idxd: add descs_completed field for completion record

From: Dave Jiang <[email protected]>

The descs_completed field is part of a batch descriptor's completion
record. It occupies the same location as bytes_completed does in a
normal descriptor's completion record. Add it to the uapi header to
expose it to the user.
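
A sketch of the intended user-space consumption (desc and comp are
hypothetical pointers to the submitted descriptor and its completion
record):

  struct dsa_completion_record *cr = comp;
  uint32_t done = (desc->opcode == DSA_OPCODE_BATCH) ?
                  cr->descs_completed :   /* batch: descriptors completed */
                  cr->bytes_completed;    /* normal: bytes completed */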

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
include/uapi/linux/idxd.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
index 4b584d5afd87..76ad71bf751e 100644
--- a/include/uapi/linux/idxd.h
+++ b/include/uapi/linux/idxd.h
@@ -314,7 +314,10 @@ struct dsa_completion_record {
};
uint8_t fault_info;
uint8_t rsvd;
- uint32_t bytes_completed;
+ union {
+ uint32_t bytes_completed;
+ uint32_t descs_completed;
+ };
uint64_t fault_addr;
union {
/* common record */
--
2.37.1


2023-03-06 16:46:17

by Fenghua Yu

Subject: [PATCH v2 04/16] dmaengine: idxd: add interrupt handling for event log

From: Dave Jiang <[email protected]>

An event log interrupt is raised via the misc interrupt INTCAUSE
register when an event is written by the hardware. Add basic event log
processing support to the interrupt handler. The event log is a ring
where the hardware owns the tail and the software owns the head. The
hardware advances the tail index when an additional event has been
pushed to memory. The software processes the log entry and then
advances the head. The log is full when (tail + 1) % log_size == head.
The hardware stops writing when the log is full. The user is expected
to configure a log size large enough to handle all the expected
events.
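
Condensed from the handler below (process_entry() stands in for the
real process_evl_entry()):

  h = evl->head;                         /* head: owned by software    */
  evl_status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
  t = evl_status.tail;                   /* tail: advanced by hardware */
  while (h != t) {
          process_entry(evl->log + (h * ent_size));
          h = (h + 1) % evl->size;
  }
  evl->head = h;
  evl_status.head = h;                   /* hand the slots back to hardware */
  iowrite32(evl_status.bits_lower32, idxd->reg_base + IDXD_EVLSTATUS_OFFSET);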

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/irq.c | 48 ++++++++++++++++++++++++++++++++++++
drivers/dma/idxd/registers.h | 19 ++++++++++++++
include/uapi/linux/idxd.h | 1 +
3 files changed, 68 insertions(+)

diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
index 0d639303b515..52b8b7d9db22 100644
--- a/drivers/dma/idxd/irq.c
+++ b/drivers/dma/idxd/irq.c
@@ -217,6 +217,49 @@ static void idxd_int_handle_revoke(struct work_struct *work)
kfree(revoke);
}

+static void process_evl_entry(struct idxd_device *idxd, struct __evl_entry *entry_head)
+{
+ struct device *dev = &idxd->pdev->dev;
+ u8 status;
+
+ status = DSA_COMP_STATUS(entry_head->error);
+ dev_warn_ratelimited(dev, "Device error %#x operation: %#x fault addr: %#llx\n",
+ status, entry_head->operation, entry_head->fault_addr);
+}
+
+static void process_evl_entries(struct idxd_device *idxd)
+{
+ union evl_status_reg evl_status;
+ unsigned int h, t;
+ struct idxd_evl *evl = idxd->evl;
+ struct __evl_entry *entry_head;
+ unsigned int ent_size = evl_ent_size(idxd);
+ u32 size;
+
+ evl_status.bits = 0;
+ evl_status.int_pending = 1;
+
+ spin_lock(&evl->lock);
+ /* Clear interrupt pending bit */
+ iowrite32(evl_status.bits_upper32,
+ idxd->reg_base + IDXD_EVLSTATUS_OFFSET + sizeof(u32));
+ h = evl->head;
+ evl_status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
+ t = evl_status.tail;
+ size = idxd->evl->size;
+
+ while (h != t) {
+ entry_head = (struct __evl_entry *)(evl->log + (h * ent_size));
+ process_evl_entry(idxd, entry_head);
+ h = (h + 1) % size;
+ }
+
+ evl->head = h;
+ evl_status.head = h;
+ iowrite32(evl_status.bits_lower32, idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
+ spin_unlock(&evl->lock);
+}
+
irqreturn_t idxd_misc_thread(int vec, void *data)
{
struct idxd_irq_entry *irq_entry = data;
@@ -304,6 +347,11 @@ irqreturn_t idxd_misc_thread(int vec, void *data)
perfmon_counter_overflow(idxd);
}

+ if (cause & IDXD_INTC_EVL) {
+ val |= IDXD_INTC_EVL;
+ process_evl_entries(idxd);
+ }
+
val ^= cause;
if (val)
dev_warn_once(dev, "Unexpected interrupt cause bits set: %#x\n",
diff --git a/drivers/dma/idxd/registers.h b/drivers/dma/idxd/registers.h
index 11bb97cf7481..148db94f9373 100644
--- a/drivers/dma/idxd/registers.h
+++ b/drivers/dma/idxd/registers.h
@@ -168,6 +168,7 @@ enum idxd_device_reset_type {
#define IDXD_INTC_OCCUPY 0x04
#define IDXD_INTC_PERFMON_OVFL 0x08
#define IDXD_INTC_HALT_STATE 0x10
+#define IDXD_INTC_EVL 0x20
#define IDXD_INTC_INT_HANDLE_REVOKED 0x80000000

#define IDXD_CMD_OFFSET 0xa0
@@ -558,6 +559,24 @@ union filter_cfg {
u64 val;
} __packed;

+#define IDXD_EVLSTATUS_OFFSET 0xf0
+
+union evl_status_reg {
+ struct {
+ u32 head:16;
+ u32 rsvd:16;
+ u32 tail:16;
+ u32 rsvd2:14;
+ u32 int_pending:1;
+ u32 rsvd3:1;
+ };
+ struct {
+ u32 bits_lower32;
+ u32 bits_upper32;
+ };
+ u64 bits;
+} __packed;
+
struct __evl_entry {
u64 rsvd:2;
u64 desc_valid:1;
diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
index 5d05bf12f2bd..0bc8eea18586 100644
--- a/include/uapi/linux/idxd.h
+++ b/include/uapi/linux/idxd.h
@@ -170,6 +170,7 @@ enum iax_completion_status {

#define DSA_COMP_STATUS_MASK 0x7f
#define DSA_COMP_STATUS_WRITE 0x80
+#define DSA_COMP_STATUS(status) ((status) & DSA_COMP_STATUS_MASK)

struct dsa_hw_desc {
uint32_t pasid:20;
--
2.37.1


2023-03-06 16:46:23

by Fenghua Yu

Subject: [PATCH v2 09/16] dmaengine: idxd: process user page faults for completion record

From: Dave Jiang <[email protected]>

DSA supports page fault handling through PRS. However, the DMA engine
that is processing the descriptor is blocked until the PRS response is
received. Other workqueues sharing the engine are also blocked.
Page fault handling by the driver with PRS disabled can be used to
mitigate the stalling.

With PRS disabled while ATS remains enabled, DSA handles a page fault
on a completion record by reporting an event in the event log. In this
instance, the descriptor is completed and the event log contains the
completion record address and the contents of the completion record.
Add support to the event log handling code to fault in the completion
record and copy the contents of the completion record to user memory.

A bitmap is introduced to keep track of discarded event log entries.
When the user process initiates ->release() of the char device, it is
no longer interested in any remaining event log entries tied to the
relevant wq and PASID. The driver marks the event log entry index in
the bitmap. Upon encountering the entries during processing, the event
log handler simply clears the bitmap bit and skips the entry rather
than attempting to process it.
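
Condensed, the discard protocol added below (entry(h) is shorthand for
indexing the log by entry size):

  /* on ->release(): mark entries still in flight for this wq/PASID */
  while (h != t) {
          if (entry(h)->pasid == pasid && entry(h)->wq_idx == wq->id)
                  set_bit(h, evl->bmap);
          h = (h + 1) % size;
  }
  /* in the event log handler: skip anything that was marked */
  if (test_bit(index, evl->bmap)) {
          clear_bit(index, evl->bmap);
          return;
  }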

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
v2:
- Call iommu_access_remote_vm() to copy completion record to user.

drivers/dma/Kconfig | 1 +
drivers/dma/idxd/cdev.c | 34 ++++++++++++++++-
drivers/dma/idxd/device.c | 22 ++++++++++-
drivers/dma/idxd/idxd.h | 2 +
drivers/dma/idxd/init.c | 2 +
drivers/dma/idxd/irq.c | 79 ++++++++++++++++++++++++++++++++++++---
include/uapi/linux/idxd.h | 1 +
7 files changed, 133 insertions(+), 8 deletions(-)

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index fb7073fc034f..c8a2d255930e 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -297,6 +297,7 @@ config INTEL_IDXD
depends on PCI_PASID
depends on SBITMAP
select DMA_ENGINE
+ select IOMMU_SVA
help
Enable support for the Intel(R) data accelerators present
in Intel Xeon CPU.
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index cbe29e1a6a44..51a5b8ab160e 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -136,6 +136,35 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
return rc;
}

+static void idxd_cdev_evl_drain_pasid(struct idxd_wq *wq, u32 pasid)
+{
+ struct idxd_device *idxd = wq->idxd;
+ struct idxd_evl *evl = idxd->evl;
+ union evl_status_reg status;
+ u16 h, t, size;
+ int ent_size = evl_ent_size(idxd);
+ struct __evl_entry *entry_head;
+
+ if (!evl)
+ return;
+
+ spin_lock(&evl->lock);
+ status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET);
+ t = status.tail;
+ h = evl->head;
+ size = evl->size;
+
+ while (h != t) {
+ entry_head = (struct __evl_entry *)(evl->log + (h * ent_size));
+ if (entry_head->pasid == pasid && entry_head->wq_idx == wq->id)
+ set_bit(h, evl->bmap);
+ h = (h + 1) % size;
+ }
+ spin_unlock(&evl->lock);
+
+ drain_workqueue(wq->wq);
+}
+
static int idxd_cdev_release(struct inode *node, struct file *filep)
{
struct idxd_user_context *ctx = filep->private_data;
@@ -161,8 +190,11 @@ static int idxd_cdev_release(struct inode *node, struct file *filep)
}
}

- if (ctx->sva)
+ if (ctx->sva) {
+ idxd_cdev_evl_drain_pasid(wq, ctx->pasid);
iommu_sva_unbind_device(ctx->sva);
+ }
+
kfree(ctx);
mutex_lock(&wq->wq_lock);
idxd_wq_put(wq);
diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index 230fe9bb56ae..fd97b2b58734 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -762,18 +762,29 @@ static int idxd_device_evl_setup(struct idxd_device *idxd)
dma_addr_t dma_addr;
int size;
struct idxd_evl *evl = idxd->evl;
+ unsigned long *bmap;
+ int rc;

if (!evl)
return 0;

size = evl_size(idxd);
+
+ bmap = bitmap_zalloc(size, GFP_KERNEL);
+ if (!bmap) {
+ rc = -ENOMEM;
+ goto err_bmap;
+ }
+
/*
* Address needs to be page aligned. However, dma_alloc_coherent() provides
* at minimal page size aligned address. No manual alignment required.
*/
addr = dma_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL);
- if (!addr)
- return -ENOMEM;
+ if (!addr) {
+ rc = -ENOMEM;
+ goto err_alloc;
+ }

memset(addr, 0, size);

@@ -781,6 +792,7 @@ static int idxd_device_evl_setup(struct idxd_device *idxd)
evl->log = addr;
evl->dma = dma_addr;
evl->log_size = size;
+ evl->bmap = bmap;

memset(&evlcfg, 0, sizeof(evlcfg));
evlcfg.bits[0] = dma_addr & GENMASK(63, 12);
@@ -799,6 +811,11 @@ static int idxd_device_evl_setup(struct idxd_device *idxd)

spin_unlock(&evl->lock);
return 0;
+
+err_alloc:
+ bitmap_free(bmap);
+err_bmap:
+ return rc;
}

static void idxd_device_evl_free(struct idxd_device *idxd)
@@ -824,6 +841,7 @@ static void idxd_device_evl_free(struct idxd_device *idxd)
iowrite64(0, idxd->reg_base + IDXD_EVLCFG_OFFSET + 8);

dma_free_coherent(dev, evl->log_size, evl->log, evl->dma);
+ bitmap_free(evl->bmap);
evl->log = NULL;
evl->size = IDXD_EVL_SIZE_MIN;
spin_unlock(&evl->lock);
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index c5d99c179902..48cdfd5ee44f 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -260,6 +260,7 @@ struct idxd_driver_data {
struct device_type *dev_type;
int compl_size;
int align;
+ int evl_cr_off;
};

struct idxd_evl {
@@ -272,6 +273,7 @@ struct idxd_evl {
/* The number of entries in the event log. */
u16 size;
u16 head;
+ unsigned long *bmap;
};

struct idxd_evl_fault {
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index a7c98fac7a85..19324fbc238c 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -47,6 +47,7 @@ static struct idxd_driver_data idxd_driver_data[] = {
.compl_size = sizeof(struct dsa_completion_record),
.align = 32,
.dev_type = &dsa_device_type,
+ .evl_cr_off = offsetof(struct dsa_evl_entry, cr),
},
[IDXD_TYPE_IAX] = {
.name_prefix = "iax",
@@ -54,6 +55,7 @@ static struct idxd_driver_data idxd_driver_data[] = {
.compl_size = sizeof(struct iax_completion_record),
.align = 64,
.dev_type = &iax_device_type,
+ .evl_cr_off = offsetof(struct iax_evl_entry, cr),
},
};

diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
index 52b8b7d9db22..24c688f0ca9e 100644
--- a/drivers/dma/idxd/irq.c
+++ b/drivers/dma/idxd/irq.c
@@ -7,6 +7,9 @@
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/dmaengine.h>
#include <linux/delay.h>
+#include <linux/ioasid.h>
+#include <linux/iommu.h>
+#include <linux/sched/mm.h>
#include <uapi/linux/idxd.h>
#include "../dmaengine.h"
#include "idxd.h"
@@ -217,14 +220,80 @@ static void idxd_int_handle_revoke(struct work_struct *work)
kfree(revoke);
}

-static void process_evl_entry(struct idxd_device *idxd, struct __evl_entry *entry_head)
+static void idxd_evl_fault_work(struct work_struct *work)
+{
+ struct idxd_evl_fault *fault = container_of(work, struct idxd_evl_fault, work);
+ struct idxd_wq *wq = fault->wq;
+ struct idxd_device *idxd = wq->idxd;
+ struct device *dev = &idxd->pdev->dev;
+ struct __evl_entry *entry_head = fault->entry;
+ void *cr = (void *)entry_head + idxd->data->evl_cr_off;
+ int cr_size = idxd->data->compl_size, copied;
+
+ switch (fault->status) {
+ case DSA_COMP_CRA_XLAT:
+ case DSA_COMP_DRAIN_EVL:
+ /*
+ * Copy completion record to user address space that is found
+ * by PASID.
+ */
+ copied = iommu_access_remote_vm(entry_head->pasid,
+ entry_head->fault_addr, cr,
+ cr_size, FOLL_WRITE);
+ if (copied != cr_size) {
+ dev_err(dev, "Failed to write to completion record. (%d:%d)\n",
+ cr_size, copied);
+ }
+ break;
+ default:
+ dev_err(dev, "Unrecognized error code: %#x\n",
+ DSA_COMP_STATUS(entry_head->error));
+ break;
+ }
+
+ kmem_cache_free(idxd->evl_cache, fault);
+}
+
+static void process_evl_entry(struct idxd_device *idxd,
+ struct __evl_entry *entry_head, unsigned int index)
{
struct device *dev = &idxd->pdev->dev;
+ struct idxd_evl *evl = idxd->evl;
u8 status;

- status = DSA_COMP_STATUS(entry_head->error);
- dev_warn_ratelimited(dev, "Device error %#x operation: %#x fault addr: %#llx\n",
- status, entry_head->operation, entry_head->fault_addr);
+ if (test_bit(index, evl->bmap)) {
+ clear_bit(index, evl->bmap);
+ } else {
+ status = DSA_COMP_STATUS(entry_head->error);
+
+ if (status == DSA_COMP_CRA_XLAT || status == DSA_COMP_DRAIN_EVL) {
+ struct idxd_evl_fault *fault;
+ int ent_size = evl_ent_size(idxd);
+
+ if (entry_head->rci)
+ dev_dbg(dev, "Completion Int Req set, ignoring!\n");
+
+ if (!entry_head->rcr && status == DSA_COMP_DRAIN_EVL)
+ return;
+
+ fault = kmem_cache_alloc(idxd->evl_cache, GFP_ATOMIC);
+ if (fault) {
+ struct idxd_wq *wq = idxd->wqs[entry_head->wq_idx];
+
+ fault->wq = wq;
+ fault->status = status;
+ memcpy(&fault->entry, entry_head, ent_size);
+ INIT_WORK(&fault->work, idxd_evl_fault_work);
+ queue_work(wq->wq, &fault->work);
+ } else {
+ dev_warn(dev, "Failed to service fault work.\n");
+ }
+ } else {
+ dev_warn_ratelimited(dev, "Device error %#x operation: %#x fault addr: %#llx\n",
+ status, entry_head->operation,
+ entry_head->fault_addr);
+ }
+ }
}

static void process_evl_entries(struct idxd_device *idxd)
@@ -250,7 +319,7 @@ static void process_evl_entries(struct idxd_device *idxd)

while (h != t) {
entry_head = (struct __evl_entry *)(evl->log + (h * ent_size));
- process_evl_entry(idxd, entry_head);
+ process_evl_entry(idxd, entry_head, h);
h = (h + 1) % size;
}

diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
index e86199d09a91..4b584d5afd87 100644
--- a/include/uapi/linux/idxd.h
+++ b/include/uapi/linux/idxd.h
@@ -135,6 +135,7 @@ enum dsa_completion_status {
DSA_COMP_HW_ERR1,
DSA_COMP_HW_ERR_DRB,
DSA_COMP_TRANSLATION_FAIL,
+ DSA_COMP_DRAIN_EVL = 0x26,
};

enum iax_completion_status {
--
2.37.1


2023-03-06 16:46:26

by Fenghua Yu

Subject: [PATCH v2 11/16] dmaengine: idxd: process batch descriptor completion record faults

From: Dave Jiang <[email protected]>

Add event log processing for faults on a user batch descriptor's
completion record.

When encountering an event log entry for a page fault on a completion
record, the driver is expected to do the following:
1. If the "first error in batch" bit in event log entry error info is
set, discard any previously recorded errors associated with the
"batch identifier".
2. Fix the page fault according to the fault address in the event log. If
successful, write the completion record to the fault address in user space.
3. If an error is encountered while writing the completion record and it is
associated to a descriptor in the batch, the driver associates the error
with the batch identifier of the event log entry and tracks it until the
event log entry for the corresponding batch desc is encountered.

While processing an event log entry for a batch descriptor whose error
indicates that one or more descriptors in the batch had event log
entries, the driver does the following before writing the batch
completion record (see the sketch below):
1. If the status field of the completion record is 0x1, the driver will
change it to error code 0x5 (one or more operations in batch completed
with status not successful) and changes the result field to 1.
2. If the status is error code 0x6 (page fault on batch descriptor list
address), change the result field to 1.
3. If status is any other value, the completion record is not changed.
4. Clear the recorded error in preparation for next batch with same batch
identifier.

The result field is for user software to determine whether to set the
"Batch Error" flag bit in the descriptor for continuation of partial
batch descriptor completion. See DSA spec 2.0 for additional information.

If no error has been recorded for the batch, the batch completion record is
written to user space as is.
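
Condensed, the correlation added below keys a per-device table by
batch identifier (see IDXD_MAX_BATCH_IDENT in the patch):

  case DSA_COMP_CRA_XLAT:          /* CR fault for a descriptor in a batch */
          if (entry_head->batch && entry_head->first_err_in_batch)
                  evl->batch_fail[entry_head->batch_id] = false;  /* step 1 */
          /* a failed CR writeback later sets batch_fail[batch_id] = true */
          break;
  case DSA_COMP_BATCH_EVL_ERR:     /* the batch descriptor itself */
          if (evl->batch_fail[entry_head->batch_id]) {
                  if (*status == DSA_COMP_SUCCESS)
                          *status = DSA_COMP_BATCH_FAIL;          /* 0x1 -> 0x5 */
                  *result = 1;
                  evl->batch_fail[entry_head->batch_id] = false;  /* step 4 */
          }
          break;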

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
v2:
- Call iommu_access_remote_vm() to copy completion record to user.

drivers/dma/idxd/idxd.h | 3 ++
drivers/dma/idxd/init.c | 4 ++
drivers/dma/idxd/irq.c | 74 ++++++++++++++++++++++++++++--------
drivers/dma/idxd/registers.h | 4 +-
include/uapi/linux/idxd.h | 1 +
5 files changed, 70 insertions(+), 16 deletions(-)

diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index 48cdfd5ee44f..8f3b0fbd04ae 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -261,6 +261,8 @@ struct idxd_driver_data {
int compl_size;
int align;
int evl_cr_off;
+ int cr_status_off;
+ int cr_result_off;
};

struct idxd_evl {
@@ -274,6 +276,7 @@ struct idxd_evl {
u16 size;
u16 head;
unsigned long *bmap;
+ bool batch_fail[IDXD_MAX_BATCH_IDENT];
};

struct idxd_evl_fault {
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index 19324fbc238c..73fb9c74ed20 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -48,6 +48,8 @@ static struct idxd_driver_data idxd_driver_data[] = {
.align = 32,
.dev_type = &dsa_device_type,
.evl_cr_off = offsetof(struct dsa_evl_entry, cr),
+ .cr_status_off = offsetof(struct dsa_completion_record, status),
+ .cr_result_off = offsetof(struct dsa_completion_record, result),
},
[IDXD_TYPE_IAX] = {
.name_prefix = "iax",
@@ -56,6 +58,8 @@ static struct idxd_driver_data idxd_driver_data[] = {
.align = 64,
.dev_type = &iax_device_type,
.evl_cr_off = offsetof(struct iax_evl_entry, cr),
+ .cr_status_off = offsetof(struct iax_completion_record, status),
+ .cr_result_off = offsetof(struct iax_completion_record, error_code),
},
};

diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
index 24c688f0ca9e..894a73e56cb6 100644
--- a/drivers/dma/idxd/irq.c
+++ b/drivers/dma/idxd/irq.c
@@ -226,28 +226,71 @@ static void idxd_evl_fault_work(struct work_struct *work)
struct idxd_wq *wq = fault->wq;
struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev;
+ struct idxd_evl *evl = idxd->evl;
struct __evl_entry *entry_head = fault->entry;
void *cr = (void *)entry_head + idxd->data->evl_cr_off;
- int cr_size = idxd->data->compl_size, copied;
+ int cr_size = idxd->data->compl_size;
+ u8 *status = (u8 *)cr + idxd->data->cr_status_off;
+ u8 *result = (u8 *)cr + idxd->data->cr_result_off;
+ int copied, copy_size;
+ bool *bf;

switch (fault->status) {
case DSA_COMP_CRA_XLAT:
- case DSA_COMP_DRAIN_EVL:
- /*
- * Copy completion record to user address space that is found
- * by PASID.
- */
- copied = iommu_access_remote_vm(entry_head->pasid,
- entry_head->fault_addr, cr,
- cr_size, FOLL_WRITE);
- if (copied != cr_size) {
- dev_err(dev, "Failed to write to completion record. (%d:%d)\n",
- cr_size, copied);
+ if (entry_head->batch && entry_head->first_err_in_batch)
+ evl->batch_fail[entry_head->batch_id] = false;
+
+ copy_size = cr_size;
+ break;
+ case DSA_COMP_BATCH_EVL_ERR:
+ bf = &evl->batch_fail[entry_head->batch_id];
+
+ copy_size = entry_head->rcr || *bf ? cr_size : 0;
+ if (*bf) {
+ if (*status == DSA_COMP_SUCCESS)
+ *status = DSA_COMP_BATCH_FAIL;
+ *result = 1;
+ *bf = false;
}
break;
+ case DSA_COMP_DRAIN_EVL:
+ copy_size = cr_size;
+ break;
default:
- dev_err(dev, "Unrecognized error code: %#x\n",
- DSA_COMP_STATUS(entry_head->error));
+ copy_size = 0;
+ dev_err(dev, "Unrecognized error code: %#x\n", fault->status);
+ break;
+ }
+
+ if (copy_size == 0)
+ return;
+
+ /*
+ * Copy completion record to user address space that is found by PASID.
+ */
+ copied = iommu_access_remote_vm(entry_head->pasid,
+ entry_head->fault_addr,
+ cr, copy_size, FOLL_WRITE);
+
+ switch (fault->status) {
+ case DSA_COMP_CRA_XLAT:
+ if (copied != copy_size) {
+ dev_err(dev, "Failed to write to completion record: (%d:%d)\n",
+ copy_size, copied);
+ if (entry_head->batch)
+ evl->batch_fail[entry_head->batch_id] = true;
+ }
+ break;
+ case DSA_COMP_BATCH_EVL_ERR:
+ if (copied != copy_size) {
+ dev_err(dev, "Failed to write to batch completion record: (%d:%d)\n",
+ copy_size, copied);
+ }
+ break;
+ case DSA_COMP_DRAIN_EVL:
+ if (copied != copy_size)
+ dev_err(dev, "Failed to write to drain completion record: (%d:%d)\n",
+ copy_size, copied);
break;
}

@@ -266,7 +309,8 @@ static void process_evl_entry(struct idxd_device *idxd,
} else {
status = DSA_COMP_STATUS(entry_head->error);

- if (status == DSA_COMP_CRA_XLAT || status == DSA_COMP_DRAIN_EVL) {
+ if (status == DSA_COMP_CRA_XLAT || status == DSA_COMP_DRAIN_EVL ||
+ status == DSA_COMP_BATCH_EVL_ERR) {
struct idxd_evl_fault *fault;
int ent_size = evl_ent_size(idxd);

diff --git a/drivers/dma/idxd/registers.h b/drivers/dma/idxd/registers.h
index 148db94f9373..9f3959d001b6 100644
--- a/drivers/dma/idxd/registers.h
+++ b/drivers/dma/idxd/registers.h
@@ -35,7 +35,7 @@ union gen_cap_reg {
u64 drain_readback:1;
u64 rsvd2:3;
u64 evl_support:2;
- u64 rsvd4:1;
+ u64 batch_continuation:1;
u64 max_xfer_shift:5;
u64 max_batch_shift:4;
u64 max_ims_mult:6;
@@ -577,6 +577,8 @@ union evl_status_reg {
u64 bits;
} __packed;

+#define IDXD_MAX_BATCH_IDENT 256
+
struct __evl_entry {
u64 rsvd:2;
u64 desc_valid:1;
diff --git a/include/uapi/linux/idxd.h b/include/uapi/linux/idxd.h
index 76ad71bf751e..606b52e88ce3 100644
--- a/include/uapi/linux/idxd.h
+++ b/include/uapi/linux/idxd.h
@@ -136,6 +136,7 @@ enum dsa_completion_status {
DSA_COMP_HW_ERR_DRB,
DSA_COMP_TRANSLATION_FAIL,
DSA_COMP_DRAIN_EVL = 0x26,
+ DSA_COMP_BATCH_EVL_ERR,
};

enum iax_completion_status {
--
2.37.1


2023-03-06 16:46:29

by Fenghua Yu

Subject: [PATCH v2 13/16] dmaengine: idxd: add a device to represent the file opened

From: Dave Jiang <[email protected]>

Embed a struct device for the user file context in order to export
sysfs attributes related to the opened file. Tie the lifetime of the
file context to the device. The sysfs entry is added under the char
device.
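
Condensed, the file lifetime now follows the driver-model reference
count:

  /* open(): create and register the file device */
  device_initialize(fdev);
  dev_set_name(fdev, "file%d", ctx->id);
  device_add(fdev);
  /* release(): drop it; cleanup moves to the ->release() callback */
  device_unregister(user_ctx_dev(ctx));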

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/cdev.c | 122 +++++++++++++++++++++++++++++++---------
drivers/dma/idxd/idxd.h | 2 +
2 files changed, 97 insertions(+), 27 deletions(-)

diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index 3ce134afa867..2ff9a680ea5e 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -22,6 +22,13 @@ struct idxd_cdev_context {
struct ida minor_ida;
};

+/*
+ * Since user file names are global in DSA devices, define their ida's as
+ * global to avoid conflict file names.
+ */
+static DEFINE_IDA(file_ida);
+static DEFINE_MUTEX(ida_lock);
+
/*
* ictx is an array based off of accelerator types. enum idxd_type
* is used as index
@@ -37,7 +44,60 @@ struct idxd_user_context {
unsigned int pasid;
unsigned int flags;
struct iommu_sva *sva;
+ struct idxd_dev idxd_dev;
u64 counters[COUNTER_MAX];
+ int id;
+};
+
+static void idxd_cdev_evl_drain_pasid(struct idxd_wq *wq, u32 pasid);
+static void idxd_xa_pasid_remove(struct idxd_user_context *ctx);
+
+static inline struct idxd_user_context *dev_to_uctx(struct device *dev)
+{
+ struct idxd_dev *idxd_dev = confdev_to_idxd_dev(dev);
+
+ return container_of(idxd_dev, struct idxd_user_context, idxd_dev);
+}
+
+static void idxd_file_dev_release(struct device *dev)
+{
+ struct idxd_user_context *ctx = dev_to_uctx(dev);
+ struct idxd_wq *wq = ctx->wq;
+ struct idxd_device *idxd = wq->idxd;
+ int rc;
+
+ mutex_lock(&ida_lock);
+ ida_free(&file_ida, ctx->id);
+ mutex_unlock(&ida_lock);
+
+ /* Wait for in-flight operations to complete. */
+ if (wq_shared(wq)) {
+ idxd_device_drain_pasid(idxd, ctx->pasid);
+ } else {
+ if (device_user_pasid_enabled(idxd)) {
+ /* The wq disable in the disable pasid function will drain the wq */
+ rc = idxd_wq_disable_pasid(wq);
+ if (rc < 0)
+ dev_err(dev, "wq disable pasid failed.\n");
+ } else {
+ idxd_wq_drain(wq);
+ }
+ }
+
+ if (ctx->sva) {
+ idxd_cdev_evl_drain_pasid(wq, ctx->pasid);
+ iommu_sva_unbind_device(ctx->sva);
+ idxd_xa_pasid_remove(ctx);
+ }
+ kfree(ctx);
+ mutex_lock(&wq->wq_lock);
+ idxd_wq_put(wq);
+ mutex_unlock(&wq->wq_lock);
+}
+
+static struct device_type idxd_cdev_file_type = {
+ .name = "idxd_file",
+ .release = idxd_file_dev_release,
};

static void idxd_cdev_dev_release(struct device *dev)
@@ -105,10 +165,11 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
struct idxd_user_context *ctx;
struct idxd_device *idxd;
struct idxd_wq *wq;
- struct device *dev;
+ struct device *dev, *fdev;
int rc = 0;
struct iommu_sva *sva;
unsigned int pasid;
+ struct idxd_cdev *idxd_cdev;

wq = inode_wq(inode);
idxd = wq->idxd;
@@ -163,10 +224,41 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
}
}

+ idxd_cdev = wq->idxd_cdev;
+ mutex_lock(&ida_lock);
+ ctx->id = ida_alloc(&file_ida, GFP_KERNEL);
+ mutex_unlock(&ida_lock);
+ if (ctx->id < 0) {
+ dev_warn(dev, "ida alloc failure\n");
+ goto failed_ida;
+ }
+ ctx->idxd_dev.type = IDXD_DEV_CDEV_FILE;
+ fdev = user_ctx_dev(ctx);
+ device_initialize(fdev);
+ fdev->parent = cdev_dev(idxd_cdev);
+ fdev->bus = &dsa_bus_type;
+ fdev->type = &idxd_cdev_file_type;
+
+ rc = dev_set_name(fdev, "file%d", ctx->id);
+ if (rc < 0) {
+ dev_warn(dev, "set name failure\n");
+ goto failed_dev_name;
+ }
+
+ rc = device_add(fdev);
+ if (rc < 0) {
+ dev_warn(dev, "file device add failure\n");
+ goto failed_dev_add;
+ }
+
idxd_wq_get(wq);
mutex_unlock(&wq->wq_lock);
return 0;

+ failed_dev_add:
+ failed_dev_name:
+ put_device(fdev);
+ failed_ida:
failed_set_pasid:
if (device_user_pasid_enabled(idxd))
idxd_xa_pasid_remove(ctx);
@@ -214,35 +306,10 @@ static int idxd_cdev_release(struct inode *node, struct file *filep)
struct idxd_wq *wq = ctx->wq;
struct idxd_device *idxd = wq->idxd;
struct device *dev = &idxd->pdev->dev;
- int rc;

dev_dbg(dev, "%s called\n", __func__);
filep->private_data = NULL;
-
- /* Wait for in-flight operations to complete. */
- if (wq_shared(wq)) {
- idxd_device_drain_pasid(idxd, ctx->pasid);
- } else {
- if (device_user_pasid_enabled(idxd)) {
- /* The wq disable in the disable pasid function will drain the wq */
- rc = idxd_wq_disable_pasid(wq);
- if (rc < 0)
- dev_err(dev, "wq disable pasid failed.\n");
- } else {
- idxd_wq_drain(wq);
- }
- }
-
- if (ctx->sva) {
- idxd_cdev_evl_drain_pasid(wq, ctx->pasid);
- iommu_sva_unbind_device(ctx->sva);
- idxd_xa_pasid_remove(ctx);
- }
-
- kfree(ctx);
- mutex_lock(&wq->wq_lock);
- idxd_wq_put(wq);
- mutex_unlock(&wq->wq_lock);
+ device_unregister(user_ctx_dev(ctx));
return 0;
}

@@ -373,6 +440,7 @@ void idxd_wq_del_cdev(struct idxd_wq *wq)
struct idxd_cdev *idxd_cdev;

idxd_cdev = wq->idxd_cdev;
+ ida_destroy(&file_ida);
wq->idxd_cdev = NULL;
cdev_device_del(&idxd_cdev->cdev, cdev_dev(idxd_cdev));
put_device(cdev_dev(idxd_cdev));
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index f92db20015fb..b5d7ef611bae 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -32,6 +32,7 @@ enum idxd_dev_type {
IDXD_DEV_GROUP,
IDXD_DEV_ENGINE,
IDXD_DEV_CDEV,
+ IDXD_DEV_CDEV_FILE,
IDXD_DEV_MAX_TYPE,
};

@@ -405,6 +406,7 @@ enum idxd_completion_status {
#define engine_confdev(engine) &engine->idxd_dev.conf_dev
#define group_confdev(group) &group->idxd_dev.conf_dev
#define cdev_dev(cdev) &cdev->idxd_dev.conf_dev
+#define user_ctx_dev(ctx) (&(ctx)->idxd_dev.conf_dev)

#define confdev_to_idxd_dev(dev) container_of(dev, struct idxd_dev, conf_dev)
#define idxd_dev_to_idxd(idxd_dev) container_of(idxd_dev, struct idxd_device, idxd_dev)
--
2.37.1


2023-03-06 16:46:32

by Fenghua Yu

[permalink] [raw]
Subject: [PATCH v2 14/16] dmaengine: idxd: expose fault counters to sysfs

From: Dave Jiang <[email protected]>

Expose cr_faults and cr_fault_failures counters to user space. This
allows a user app to keep track of how many completion record (CR) faults
the application is causing and how many CR writebacks have failed. A high
cr_fault_failures count is a bad sign, since it means the app is
submitting descriptors with bad CR addresses. A user monitoring daemon
may want to kill such an application, as it may be malicious and
attempting to flood the device event log.

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
.../ABI/stable/sysfs-driver-dma-idxd | 17 +++++++
drivers/dma/idxd/cdev.c | 46 +++++++++++++++++++
2 files changed, 63 insertions(+)

diff --git a/Documentation/ABI/stable/sysfs-driver-dma-idxd b/Documentation/ABI/stable/sysfs-driver-dma-idxd
index e01916611452..73ab86196a41 100644
--- a/Documentation/ABI/stable/sysfs-driver-dma-idxd
+++ b/Documentation/ABI/stable/sysfs-driver-dma-idxd
@@ -318,3 +318,20 @@ Description: Allows control of the number of batch descriptors that can be
1 (1/2 of max value), 2 (1/4 of the max value), and 3 (1/8 of
the max value). It's visible only on platforms that support
the capability.
+
+What: /sys/bus/dsa/devices/wq<m>.<n>/dsa<x>\!wq<m>.<n>/file<y>/cr_faults
+Date: Sept 14, 2022
+KernelVersion: 6.4.0
+Contact: [email protected]
+Description: Show the number of Completion Record (CR) faults this application
+ has caused.
+
+What: /sys/bus/dsa/devices/wq<m>.<n>/dsa<x>\!wq<m>.<n>/file<y>/cr_fault_failures
+Date: Sept 14, 2022
+KernelVersion: 6.4.0
+Contact: [email protected]
+Description: Show the number of Completion Record (CR) fault failures that this
+ application has caused. The failure counter is incremented when the
+ driver cannot fault in the address for the CR. Typically this is
+ caused by a bad address programmed in the submitted descriptor or
+ by a malicious submitter using a bad CR address on purpose.
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index 2ff9a680ea5e..9fc6565bf807 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -59,6 +59,51 @@ static inline struct idxd_user_context *dev_to_uctx(struct device *dev)
return container_of(idxd_dev, struct idxd_user_context, idxd_dev);
}

+static ssize_t cr_faults_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct idxd_user_context *ctx = dev_to_uctx(dev);
+
+ return sysfs_emit(buf, "%llu\n", ctx->counters[COUNTER_FAULTS]);
+}
+static DEVICE_ATTR_RO(cr_faults);
+
+static ssize_t cr_fault_failures_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct idxd_user_context *ctx = dev_to_uctx(dev);
+
+ return sysfs_emit(buf, "%llu\n", ctx->counters[COUNTER_FAULT_FAILS]);
+}
+static DEVICE_ATTR_RO(cr_fault_failures);
+
+static struct attribute *cdev_file_attributes[] = {
+ &dev_attr_cr_faults.attr,
+ &dev_attr_cr_fault_failures.attr,
+ NULL
+};
+
+static umode_t cdev_file_attr_visible(struct kobject *kobj, struct attribute *a, int n)
+{
+ struct device *dev = container_of(kobj, typeof(*dev), kobj);
+ struct idxd_user_context *ctx = dev_to_uctx(dev);
+ struct idxd_wq *wq = ctx->wq;
+
+ if (!wq_pasid_enabled(wq))
+ return 0;
+
+ return a->mode;
+}
+
+static const struct attribute_group cdev_file_attribute_group = {
+ .attrs = cdev_file_attributes,
+ .is_visible = cdev_file_attr_visible,
+};
+
+static const struct attribute_group *cdev_file_attribute_groups[] = {
+ &cdev_file_attribute_group,
+ NULL
+};
+
static void idxd_file_dev_release(struct device *dev)
{
struct idxd_user_context *ctx = dev_to_uctx(dev);
@@ -98,6 +143,7 @@ static void idxd_file_dev_release(struct device *dev)
static struct device_type idxd_cdev_file_type = {
.name = "idxd_file",
.release = idxd_file_dev_release,
+ .groups = cdev_file_attribute_groups,
};

static void idxd_cdev_dev_release(struct device *dev)
--
2.37.1


2023-03-06 16:46:37

by Fenghua Yu

[permalink] [raw]
Subject: [PATCH v2 12/16] dmaengine: idxd: add per file user counters for completion record faults

From: Dave Jiang <[email protected]>

Add counters per opened file for the char device in order to keep track of
how many completion record faults occurred and how many of those faults
failed the writeback by the driver after attempting to fault in the page.
An xarray is added to associate the PASID with the struct
idxd_user_context so the counters can be managed.
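
The lookup on the fault path is then a short critical section (distilled
from the diff below; holding wq->uc_lock keeps the context from being
freed by a concurrent file release):

	mutex_lock(&wq->uc_lock);
	ctx = xa_load(&wq->upasid_xa, pasid);
	if (ctx)
		ctx->counters[index]++;
	mutex_unlock(&wq->uc_lock);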

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/cdev.c | 50 +++++++++++++++++++++++++++++++++++++---
drivers/dma/idxd/idxd.h | 11 +++++++++
drivers/dma/idxd/init.c | 2 ++
drivers/dma/idxd/irq.c | 4 ++++
drivers/dma/idxd/sysfs.c | 1 +
5 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index 51a5b8ab160e..3ce134afa867 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -11,6 +11,7 @@
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/iommu.h>
+#include <linux/xarray.h>
#include <uapi/linux/idxd.h>
#include "registers.h"
#include "idxd.h"
@@ -36,6 +37,7 @@ struct idxd_user_context {
unsigned int pasid;
unsigned int flags;
struct iommu_sva *sva;
+ u64 counters[COUNTER_MAX];
};

static void idxd_cdev_dev_release(struct device *dev)
@@ -68,6 +70,36 @@ static inline struct idxd_wq *inode_wq(struct inode *inode)
return idxd_cdev->wq;
}

+void idxd_user_counter_increment(struct idxd_wq *wq, u32 pasid, int index)
+{
+ struct idxd_user_context *ctx;
+
+ if (index >= COUNTER_MAX)
+ return;
+
+ mutex_lock(&wq->uc_lock);
+ ctx = xa_load(&wq->upasid_xa, pasid);
+ if (!ctx) {
+ mutex_unlock(&wq->uc_lock);
+ return;
+ }
+ ctx->counters[index]++;
+ mutex_unlock(&wq->uc_lock);
+}
+
+static void idxd_xa_pasid_remove(struct idxd_user_context *ctx)
+{
+ struct idxd_wq *wq = ctx->wq;
+ void *ptr;
+
+ mutex_lock(&wq->uc_lock);
+ ptr = xa_cmpxchg(&wq->upasid_xa, ctx->pasid, ctx, NULL, GFP_KERNEL);
+ if (ptr != (void *)ctx)
+ dev_warn(&wq->idxd->pdev->dev, "xarray cmpxchg failed for pasid %u\n",
+ ctx->pasid);
+ mutex_unlock(&wq->uc_lock);
+}
+
static int idxd_cdev_open(struct inode *inode, struct file *filp)
{
struct idxd_user_context *ctx;
@@ -108,20 +140,25 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)

pasid = iommu_sva_get_pasid(sva);
if (pasid == IOMMU_PASID_INVALID) {
- iommu_sva_unbind_device(sva);
rc = -EINVAL;
- goto failed;
+ goto failed_get_pasid;
}

ctx->sva = sva;
ctx->pasid = pasid;

+ mutex_lock(&wq->uc_lock);
+ rc = xa_insert(&wq->upasid_xa, pasid, ctx, GFP_KERNEL);
+ mutex_unlock(&wq->uc_lock);
+ if (rc < 0)
+ dev_warn(dev, "PASID entry already exists in xarray.\n");
+
if (wq_dedicated(wq)) {
rc = idxd_wq_set_pasid(wq, pasid);
if (rc < 0) {
iommu_sva_unbind_device(sva);
dev_err(dev, "wq set pasid failed: %d\n", rc);
- goto failed;
+ goto failed_set_pasid;
}
}
}
@@ -130,6 +167,12 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
mutex_unlock(&wq->wq_lock);
return 0;

+ failed_set_pasid:
+ if (device_user_pasid_enabled(idxd))
+ idxd_xa_pasid_remove(ctx);
+ failed_get_pasid:
+ if (device_user_pasid_enabled(idxd))
+ iommu_sva_unbind_device(sva);
failed:
mutex_unlock(&wq->wq_lock);
kfree(ctx);
@@ -193,6 +236,7 @@ static int idxd_cdev_release(struct inode *node, struct file *filep)
if (ctx->sva) {
idxd_cdev_evl_drain_pasid(wq, ctx->pasid);
iommu_sva_unbind_device(ctx->sva);
+ idxd_xa_pasid_remove(ctx);
}

kfree(ctx);
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index 8f3b0fbd04ae..f92db20015fb 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -127,6 +127,12 @@ struct idxd_pmu {

#define IDXD_MAX_PRIORITY 0xf

+enum {
+ COUNTER_FAULTS = 0,
+ COUNTER_FAULT_FAILS,
+ COUNTER_MAX
+};
+
enum idxd_wq_state {
IDXD_WQ_DISABLED = 0,
IDXD_WQ_ENABLED,
@@ -215,6 +221,10 @@ struct idxd_wq {
char name[WQ_NAME_SIZE + 1];
u64 max_xfer_bytes;
u32 max_batch_size;
+
+ /* Lock to protect upasid_xa access. */
+ struct mutex uc_lock;
+ struct xarray upasid_xa;
};

struct idxd_engine {
@@ -707,6 +717,7 @@ void idxd_cdev_remove(void);
int idxd_cdev_get_major(struct idxd_device *idxd);
int idxd_wq_add_cdev(struct idxd_wq *wq);
void idxd_wq_del_cdev(struct idxd_wq *wq);
+void idxd_user_counter_increment(struct idxd_wq *wq, u32 pasid, int index);

/* perfmon */
#if IS_ENABLED(CONFIG_INTEL_IDXD_PERFMON)
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index 73fb9c74ed20..9b3e7f0770d1 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -206,6 +206,8 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
}
bitmap_copy(wq->opcap_bmap, idxd->opcap_bmap, IDXD_MAX_OPCAP_BITS);
}
+ mutex_init(&wq->uc_lock);
+ xa_init(&wq->upasid_xa);
idxd->wqs[i] = wq;
}

diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
index 894a73e56cb6..69e0b8e1b3cf 100644
--- a/drivers/dma/idxd/irq.c
+++ b/drivers/dma/idxd/irq.c
@@ -241,6 +241,7 @@ static void idxd_evl_fault_work(struct work_struct *work)
evl->batch_fail[entry_head->batch_id] = false;

copy_size = cr_size;
+ idxd_user_counter_increment(wq, entry_head->pasid, COUNTER_FAULTS);
break;
case DSA_COMP_BATCH_EVL_ERR:
bf = &evl->batch_fail[entry_head->batch_id];
@@ -252,6 +253,7 @@ static void idxd_evl_fault_work(struct work_struct *work)
*result = 1;
*bf = false;
}
+ idxd_user_counter_increment(wq, entry_head->pasid, COUNTER_FAULTS);
break;
case DSA_COMP_DRAIN_EVL:
copy_size = cr_size;
@@ -275,6 +277,7 @@ static void idxd_evl_fault_work(struct work_struct *work)
switch (fault->status) {
case DSA_COMP_CRA_XLAT:
if (copied != copy_size) {
+ idxd_user_counter_increment(wq, entry_head->pasid, COUNTER_FAULT_FAILS);
dev_err(dev, "Failed to write to completion record: (%d:%d)\n",
copy_size, copied);
if (entry_head->batch)
@@ -283,6 +286,7 @@ static void idxd_evl_fault_work(struct work_struct *work)
break;
case DSA_COMP_BATCH_EVL_ERR:
if (copied != copy_size) {
+ idxd_user_counter_increment(wq, entry_head->pasid, COUNTER_FAULT_FAILS);
dev_err(dev, "Failed to write to batch completion record: (%d:%d)\n",
copy_size, copied);
}
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
index 8b9dfa0d2b99..465d2e7627e4 100644
--- a/drivers/dma/idxd/sysfs.c
+++ b/drivers/dma/idxd/sysfs.c
@@ -1292,6 +1292,7 @@ static void idxd_conf_wq_release(struct device *dev)

bitmap_free(wq->opcap_bmap);
kfree(wq->wqcfg);
+ xa_destroy(&wq->upasid_xa);
kfree(wq);
}

--
2.37.1


2023-03-06 16:46:40

by Fenghua Yu

[permalink] [raw]
Subject: [PATCH v2 15/16] dmaengine: idxd: add pid to exported sysfs attribute for opened file

From: Dave Jiang <[email protected]>

Provide the pid of the application that opened the file. This allows the
monitor daemon to easily correlate which app opened the file and to kill
the app by pid if that is the desired action.
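
Together with the counters from the previous patches, a monitor daemon
could act on the attributes roughly as follows. This is only a sketch;
the sysfs path and the failure threshold are invented for illustration:

#include <stdio.h>
#include <signal.h>
#include <sys/types.h>

/* Example path; the real one depends on device/wq/file naming. */
#define FILE_DIR "/sys/bus/dsa/devices/wq0.0/dsa0!wq0.0/file0/"

static unsigned long long read_ull(const char *path)
{
	unsigned long long val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%llu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	unsigned long long fails = read_ull(FILE_DIR "cr_fault_failures");
	pid_t pid = (pid_t)read_ull(FILE_DIR "pid");

	/* The threshold is arbitrary, for illustration only. */
	if (fails > 100 && pid > 0)
		kill(pid, SIGKILL);
	return 0;
}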

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
Documentation/ABI/stable/sysfs-driver-dma-idxd | 8 ++++++++
drivers/dma/idxd/cdev.c | 11 +++++++++++
2 files changed, 19 insertions(+)

diff --git a/Documentation/ABI/stable/sysfs-driver-dma-idxd b/Documentation/ABI/stable/sysfs-driver-dma-idxd
index 73ab86196a41..5d0df57f5298 100644
--- a/Documentation/ABI/stable/sysfs-driver-dma-idxd
+++ b/Documentation/ABI/stable/sysfs-driver-dma-idxd
@@ -335,3 +335,11 @@ Description: Show the number of Completion Record (CR) faults failures that this
driver cannot fault in the address for the CR. Typically this is caused
by a bad address programmed in the submitted descriptor or a malicious
submitter is using bad CR address on purpose.
+
+What: /sys/bus/dsa/devices/wq<m>.<n>/dsa<x>\!wq<m>.<n>/file<y>/pid
+Date: Sept 14, 2022
+KernelVersion: 6.4.0
+Contact: [email protected]
+Description: Show the process id of the application that opened the file. This is
+ helpful information for a monitor daemon that wants to kill that
+ application.
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index 9fc6565bf807..e124f0628f1c 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -47,6 +47,7 @@ struct idxd_user_context {
struct idxd_dev idxd_dev;
u64 counters[COUNTER_MAX];
int id;
+ pid_t pid;
};

static void idxd_cdev_evl_drain_pasid(struct idxd_wq *wq, u32 pasid);
@@ -76,9 +77,18 @@ static ssize_t cr_fault_failures_show(struct device *dev,
}
static DEVICE_ATTR_RO(cr_fault_failures);

+static ssize_t pid_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct idxd_user_context *ctx = dev_to_uctx(dev);
+
+ return sysfs_emit(buf, "%u\n", ctx->pid);
+}
+static DEVICE_ATTR_RO(pid);
+
static struct attribute *cdev_file_attributes[] = {
&dev_attr_cr_faults.attr,
&dev_attr_cr_fault_failures.attr,
+ &dev_attr_pid.attr,
NULL
};

@@ -236,6 +246,7 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)

ctx->wq = wq;
filp->private_data = ctx;
+ ctx->pid = current->pid;

if (device_user_pasid_enabled(idxd)) {
sva = iommu_sva_bind_device(dev, current->mm);
--
2.37.1


2023-03-06 16:46:43

by Fenghua Yu

[permalink] [raw]
Subject: [PATCH v2 16/16] dmaengine: idxd: add per wq PRS disable

From: Dave Jiang <[email protected]>

Add a sysfs knob for per-wq Page Request Service (PRS) disable. This
knob disables PRS support for the specific wq. When this bit is set,
it also overrides the wq's block-on-fault setting.
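
For illustration, driving the knob from user space looks like this (the
path is an example and the wq must be disabled first; once PRS is off, a
subsequent write of 1 to block_on_fault fails with EOPNOTSUPP):

#include <stdio.h>

int main(void)
{
	/* Example path; substitute the actual wq. */
	FILE *f = fopen("/sys/bus/dsa/devices/wq0.0/prs_disable", "w");

	if (!f)
		return 1;
	/* Disabling PRS also clears the wq's block_on_fault flag. */
	fputs("1", f);
	fclose(f);
	return 0;
}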

Tested-by: Tony Zhu <[email protected]>
Signed-off-by: Dave Jiang <[email protected]>
Co-developed-by: Fenghua Yu <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
---
.../ABI/stable/sysfs-driver-dma-idxd | 10 ++++
drivers/dma/idxd/device.c | 6 +-
drivers/dma/idxd/idxd.h | 1 +
drivers/dma/idxd/registers.h | 5 +-
drivers/dma/idxd/sysfs.c | 57 ++++++++++++++++++-
5 files changed, 74 insertions(+), 5 deletions(-)

diff --git a/Documentation/ABI/stable/sysfs-driver-dma-idxd b/Documentation/ABI/stable/sysfs-driver-dma-idxd
index 5d0df57f5298..534b7a3d59fc 100644
--- a/Documentation/ABI/stable/sysfs-driver-dma-idxd
+++ b/Documentation/ABI/stable/sysfs-driver-dma-idxd
@@ -235,6 +235,16 @@ Contact: [email protected]
Description: Indicate whether ATS disable is turned on for the workqueue.
0 indicates ATS is on, and 1 indicates ATS is off for the workqueue.

+What: /sys/bus/dsa/devices/wq<m>.<n>/prs_disable
+Date: Sept 14, 2022
+KernelVersion: 6.4.0
+Contact: [email protected]
+Description: Controls whether PRS disable is turned on for the workqueue.
+ 0 indicates PRS is on, and 1 indicates PRS is off for the
+ workqueue. This option overrides the block_on_fault
+ attribute if set. It's visible only on platforms that
+ support the capability.
+
What: /sys/bus/dsa/devices/wq<m>.<n>/occupancy
Date May 25, 2021
KernelVersion: 5.14.0
diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index fd97b2b58734..3c80b9681c72 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -967,12 +967,16 @@ static int idxd_wq_config_write(struct idxd_wq *wq)
wq->wqcfg->priority = wq->priority;

if (idxd->hw.gen_cap.block_on_fault &&
- test_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags))
+ test_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags) &&
+ !test_bit(WQ_FLAG_PRS_DISABLE, &wq->flags))
wq->wqcfg->bof = 1;

if (idxd->hw.wq_cap.wq_ats_support)
wq->wqcfg->wq_ats_disable = test_bit(WQ_FLAG_ATS_DISABLE, &wq->flags);

+ if (idxd->hw.wq_cap.wq_prs_support)
+ wq->wqcfg->wq_prs_disable = test_bit(WQ_FLAG_PRS_DISABLE, &wq->flags);
+
/* bytes 12-15 */
wq->wqcfg->max_xfer_shift = ilog2(wq->max_xfer_bytes);
idxd_wqcfg_set_max_batch_shift(idxd->data->type, wq->wqcfg, ilog2(wq->max_batch_size));
diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index b5d7ef611bae..3a20e4933d07 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -143,6 +143,7 @@ enum idxd_wq_flag {
WQ_FLAG_DEDICATED = 0,
WQ_FLAG_BLOCK_ON_FAULT,
WQ_FLAG_ATS_DISABLE,
+ WQ_FLAG_PRS_DISABLE,
};

enum idxd_wq_type {
diff --git a/drivers/dma/idxd/registers.h b/drivers/dma/idxd/registers.h
index 9f3959d001b6..7b54a3939ea1 100644
--- a/drivers/dma/idxd/registers.h
+++ b/drivers/dma/idxd/registers.h
@@ -59,7 +59,8 @@ union wq_cap_reg {
u64 occupancy:1;
u64 occupancy_int:1;
u64 op_config:1;
- u64 rsvd3:9;
+ u64 wq_prs_support:1;
+ u64 rsvd4:8;
};
u64 bits;
} __packed;
@@ -371,7 +372,7 @@ union wqcfg {
u32 mode:1; /* shared or dedicated */
u32 bof:1; /* block on fault */
u32 wq_ats_disable:1;
- u32 rsvd2:1;
+ u32 wq_prs_disable:1;
u32 priority:4;
u32 pasid:20;
u32 pasid_en:1;
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
index 465d2e7627e4..293739ac5596 100644
--- a/drivers/dma/idxd/sysfs.c
+++ b/drivers/dma/idxd/sysfs.c
@@ -822,10 +822,14 @@ static ssize_t wq_block_on_fault_store(struct device *dev,
if (rc < 0)
return rc;

- if (bof)
+ if (bof) {
+ if (test_bit(WQ_FLAG_PRS_DISABLE, &wq->flags))
+ return -EOPNOTSUPP;
+
set_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags);
- else
+ } else {
clear_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags);
+ }

return count;
}
@@ -1109,6 +1113,44 @@ static ssize_t wq_ats_disable_store(struct device *dev, struct device_attribute
static struct device_attribute dev_attr_wq_ats_disable =
__ATTR(ats_disable, 0644, wq_ats_disable_show, wq_ats_disable_store);

+static ssize_t wq_prs_disable_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct idxd_wq *wq = confdev_to_wq(dev);
+
+ return sysfs_emit(buf, "%u\n", test_bit(WQ_FLAG_PRS_DISABLE, &wq->flags));
+}
+
+static ssize_t wq_prs_disable_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct idxd_wq *wq = confdev_to_wq(dev);
+ struct idxd_device *idxd = wq->idxd;
+ bool prs_dis;
+ int rc;
+
+ if (wq->state != IDXD_WQ_DISABLED)
+ return -EPERM;
+
+ if (!idxd->hw.wq_cap.wq_prs_support)
+ return -EOPNOTSUPP;
+
+ rc = kstrtobool(buf, &prs_dis);
+ if (rc < 0)
+ return rc;
+
+ if (prs_dis) {
+ set_bit(WQ_FLAG_PRS_DISABLE, &wq->flags);
+ /* when PRS is disabled, BOF needs to be off as well */
+ clear_bit(WQ_FLAG_BLOCK_ON_FAULT, &wq->flags);
+ } else {
+ clear_bit(WQ_FLAG_PRS_DISABLE, &wq->flags);
+ }
+ return count;
+}
+
+static struct device_attribute dev_attr_wq_prs_disable =
+ __ATTR(prs_disable, 0644, wq_prs_disable_show, wq_prs_disable_store);
+
static ssize_t wq_occupancy_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct idxd_wq *wq = confdev_to_wq(dev);
@@ -1239,6 +1281,7 @@ static struct attribute *idxd_wq_attributes[] = {
&dev_attr_wq_max_transfer_size.attr,
&dev_attr_wq_max_batch_size.attr,
&dev_attr_wq_ats_disable.attr,
+ &dev_attr_wq_prs_disable.attr,
&dev_attr_wq_occupancy.attr,
&dev_attr_wq_enqcmds_retries.attr,
&dev_attr_wq_op_config.attr,
@@ -1260,6 +1303,13 @@ static bool idxd_wq_attr_max_batch_size_invisible(struct attribute *attr,
idxd->data->type == IDXD_TYPE_IAX;
}

+static bool idxd_wq_attr_wq_prs_disable_invisible(struct attribute *attr,
+ struct idxd_device *idxd)
+{
+ return attr == &dev_attr_wq_prs_disable.attr &&
+ !idxd->hw.wq_cap.wq_prs_support;
+}
+
static umode_t idxd_wq_attr_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
@@ -1273,6 +1323,9 @@ static umode_t idxd_wq_attr_visible(struct kobject *kobj,
if (idxd_wq_attr_max_batch_size_invisible(attr, idxd))
return 0;

+ if (idxd_wq_attr_wq_prs_disable_invisible(attr, idxd))
+ return 0;
+
return attr->mode;
}

--
2.37.1


2023-03-07 01:42:52

by Baolu Lu

[permalink] [raw]
Subject: Re: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

On 3/7/23 12:31 AM, Fenghua Yu wrote:
> Define and export iommu_access_remote_vm() to allow IOMMU related
> drivers to access user address space by PASID.
>
> The IDXD driver would like to use it to write the user's completion
> record that the hardware device is not able to write to due to user
> page fault.

I don't quite follow here. Isn't I/O page fault already supported?

Best regards,
baolu

2023-03-07 08:41:09

by Jean-Philippe Brucker

[permalink] [raw]
Subject: Re: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

Hi Fenghua,

On Mon, Mar 06, 2023 at 08:31:30AM -0800, Fenghua Yu wrote:
> Define and export iommu_access_remote_vm() to allow IOMMU related
> drivers to access user address space by PASID.
>
> The IDXD driver would like to use it to write the user's completion
> record that the hardware device is not able to write to due to user
> page fault.
>
> Without the API, it's complex for IDXD driver to copy completion record
> to a process' fault address for two reasons:
> 1. access_remote_vm() is not exported and shouldn't be exported for
> drivers because drivers may easily cause mm reference issue.
> 2. user frees fault address pages to trigger fault by IDXD device.
>
> The driver has to call iommu_sva_find(), kthread_use_mm(), re-implement
> majority of access_remote_vm() etc to access remote vm.
>
> This IOMMU specific API hides these details and provides a clean interface
> for idxd driver and potentially other IOMMU related drivers.
>
> Suggested-by: Alistair Popple <[email protected]>
> Signed-off-by: Fenghua Yu <[email protected]>
> Cc: Joerg Roedel <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Robin Murphy <[email protected]>
> Cc: Alistair Popple <[email protected]>
> Cc: Lorenzo Stoakes <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: [email protected]
> ---
> v2:
> - Define and export iommu_access_remote_vm() for IDXD driver to write
> completion record to user address space. This change removes
> patch 8 and 9 in v1 (Alistair Popple)
>
> drivers/iommu/iommu-sva.c | 35 +++++++++++++++++++++++++++++++++++
> include/linux/iommu.h | 9 +++++++++
> 2 files changed, 44 insertions(+)
>
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> index 24bf9b2b58aa..1d7a0aee58f7 100644
> --- a/drivers/iommu/iommu-sva.c
> +++ b/drivers/iommu/iommu-sva.c
> @@ -71,6 +71,41 @@ struct mm_struct *iommu_sva_find(ioasid_t pasid)
> }
> EXPORT_SYMBOL_GPL(iommu_sva_find);
>
> +/**
> + * iommu_access_remote_vm - access another process' address space by PASID
> + * @pasid: Process Address Space ID assigned to the mm
> + * @addr: start address to access
> + * @buf: source or destination buffer
> + * @len: number of bytes to transfer
> + * @gup_flags: flags modifying lookup behaviour
> + *
> + * Another process' address space is found by PASID. A reference on @mm
> + * is taken and released inside the function.
> + *
> + * Return: number of bytes copied from source to destination.
> + */
> +int iommu_access_remote_vm(ioasid_t pasid, unsigned long addr, void *buf,
> + int len, unsigned int gup_flags)
> +{
> + struct mm_struct *mm;
> + int copied;
> +
> + mm = iommu_sva_find(pasid);

The ability to find a mm by PASID is being removed, see
https://lore.kernel.org/linux-iommu/[email protected]/

Thanks,
Jean

> + if (IS_ERR_OR_NULL(mm))
> + return 0;
> +
> + /*
> + * A reference on @mm has been held by mmget_not_zero()
> + * during iommu_sva_find().
> + */
> + copied = access_remote_vm(mm, addr, buf, len, gup_flags);
> + /* The reference is released. */
> + mmput(mm);
> +
> + return copied;
> +}
> +EXPORT_SYMBOL_GPL(iommu_access_remote_vm);
> +
> /**
> * iommu_sva_bind_device() - Bind a process address space to a device
> * @dev: the device
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 6595454d4f48..414a46a53799 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -1177,6 +1177,8 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev,
> struct mm_struct *mm);
> void iommu_sva_unbind_device(struct iommu_sva *handle);
> u32 iommu_sva_get_pasid(struct iommu_sva *handle);
> +int iommu_access_remote_vm(ioasid_t pasid, unsigned long addr, void *buf,
> + int len, unsigned int gup_flags);
> #else
> static inline struct iommu_sva *
> iommu_sva_bind_device(struct device *dev, struct mm_struct *mm)
> @@ -1192,6 +1194,13 @@ static inline u32 iommu_sva_get_pasid(struct iommu_sva *handle)
> {
> return IOMMU_PASID_INVALID;
> }
> +
> +static inline int iommu_access_remote_vm(ioasid_t pasid, unsigned long addr,
> + void *buf, int len,
> + unsigned int gup_flags)
> +{
> + return 0;
> +}
> #endif /* CONFIG_IOMMU_SVA */
>
> #endif /* __LINUX_IOMMU_H */
> --
> 2.37.1
>
>

2023-03-07 16:36:28

by Fenghua Yu

[permalink] [raw]
Subject: Re: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

Hi, Jean,

On 3/7/23 00:40, Jean-Philippe Brucker wrote:
> Hi Fenghua,
>
> On Mon, Mar 06, 2023 at 08:31:30AM -0800, Fenghua Yu wrote:
>> Define and export iommu_access_remote_vm() to allow IOMMU related
>> drivers to access user address space by PASID.
>>
>> The IDXD driver would like to use it to write the user's completion
>> record that the hardware device is not able to write to due to user
>> page fault.
>>
>> Without the API, it's complex for IDXD driver to copy completion record
>> to a process' fault address for two reasons:
>> 1. access_remote_vm() is not exported and shouldn't be exported for
>> drivers because drivers may easily cause mm reference issue.
>> 2. user frees fault address pages to trigger fault by IDXD device.
>>
>> The driver has to call iommu_sva_find(), kthread_use_mm(), re-implement
>> majority of access_remote_vm() etc to access remote vm.
>>
>> This IOMMU specific API hides these details and provides a clean interface
>> for idxd driver and potentially other IOMMU related drivers.
>>
>> Suggested-by: Alistair Popple <[email protected]>
>> Signed-off-by: Fenghua Yu <[email protected]>
>> Cc: Joerg Roedel <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Robin Murphy <[email protected]>
>> Cc: Alistair Popple <[email protected]>
>> Cc: Lorenzo Stoakes <[email protected]>
>> Cc: Christoph Hellwig <[email protected]>
>> Cc: [email protected]
>> ---
>> v2:
>> - Define and export iommu_access_remote_vm() for IDXD driver to write
>> completion record to user address space. This change removes
>> patch 8 and 9 in v1 (Alistair Popple)
>>
>> drivers/iommu/iommu-sva.c | 35 +++++++++++++++++++++++++++++++++++
>> include/linux/iommu.h | 9 +++++++++
>> 2 files changed, 44 insertions(+)
>>
>> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
>> index 24bf9b2b58aa..1d7a0aee58f7 100644
>> --- a/drivers/iommu/iommu-sva.c
>> +++ b/drivers/iommu/iommu-sva.c
>> @@ -71,6 +71,41 @@ struct mm_struct *iommu_sva_find(ioasid_t pasid)
>> }
>> EXPORT_SYMBOL_GPL(iommu_sva_find);
>>
>> +/**
>> + * iommu_access_remote_vm - access another process' address space by PASID
>> + * @pasid: Process Address Space ID assigned to the mm
>> + * @addr: start address to access
>> + * @buf: source or destination buffer
>> + * @len: number of bytes to transfer
>> + * @gup_flags: flags modifying lookup behaviour
>> + *
>> + * Another process' address space is found by PASID. A reference on @mm
>> + * is taken and released inside the function.
>> + *
>> + * Return: number of bytes copied from source to destination.
>> + */
>> +int iommu_access_remote_vm(ioasid_t pasid, unsigned long addr, void *buf,
>> + int len, unsigned int gup_flags)
>> +{
>> + struct mm_struct *mm;
>> + int copied;
>> +
>> + mm = iommu_sva_find(pasid);
>
> The ability to find a mm by PASID is being removed, see
> https://lore.kernel.org/linux-iommu/[email protected]/
>

Thank you very much for pointing this out.

I talked to Jacob just now. He will keep the iommu_sva_find() function
in his next version because this patch is still using it. He agrees
that I can still call iommu_sva_find() in this patch.

-Fenghua

2023-03-07 18:03:12

by Fenghua Yu

[permalink] [raw]
Subject: Re: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

Hi, Baolu,

On 3/6/23 17:41, Baolu Lu wrote:
> On 3/7/23 12:31 AM, Fenghua Yu wrote:
>> Define and export iommu_access_remote_vm() to allow IOMMU related
>> drivers to access user address space by PASID.
>>
>> The IDXD driver would like to use it to write the user's completion
>> record that the hardware device is not able to write to due to user
>> page fault.
>
> I don't quite follow here. Isn't I/O page fault already supported?

Patch 9 in this series explains in detail why the IDXD device
cannot use a page fault to write to user memory:
https://lore.kernel.org/dmaengine/[email protected]/

"DSA supports page fault handling through PRS. However, the DMA engine
that's processing the descriptor is blocked until the PRS response is
received. Other workqueues sharing the engine are also blocked.
Page fault handling by the driver with PRS disabled can be used to
mitigate the stalling.

With PRS disabled while ATS remains enabled, DSA handles page faults on
a completion record by reporting an event in the event log. In this
instance, the descriptor is completed and the event log contains the
completion record address and the contents of the completion record."

That's why the IDXD driver needs this IOMMU helper,
iommu_access_remote_vm(), to copy the completion record from the event
log buffer to user space.
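
With the helper, the driver side of the fault handling reduces to roughly
the following (a sketch against the API proposed in this patch; cr_addr
and cr stand for the faulting address and the CR contents taken from the
event log entry):

	/*
	 * On a completion record translation fault reported in the
	 * event log, write the CR contents carried by the entry into
	 * the submitter's address space, identified by its PASID.
	 */
	copied = iommu_access_remote_vm(entry_head->pasid, cr_addr, cr,
					cr_size, FOLL_WRITE);
	if (copied != cr_size)
		idxd_user_counter_increment(wq, entry_head->pasid,
					    COUNTER_FAULT_FAILS);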

Thanks.

-Fenghua

2023-03-08 02:24:57

by Baolu Lu

[permalink] [raw]
Subject: Re: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

On 3/8/23 1:55 AM, Fenghua Yu wrote:
> Hi, Baolu,
>
> On 3/6/23 17:41, Baolu Lu wrote:
>> On 3/7/23 12:31 AM, Fenghua Yu wrote:
>>> Define and export iommu_access_remote_vm() to allow IOMMU related
>>> drivers to access user address space by PASID.
>>>
>>> The IDXD driver would like to use it to write the user's completion
>>> record that the hardware device is not able to write to due to user
>>> page fault.
>>
>> I don't quite follow here. Isn't I/O page fault already supported?
>
> Patch 9 in this series explains in detail why the IDXD device
> cannot use a page fault to write to user memory:
> https://lore.kernel.org/dmaengine/[email protected]/
>
> "DSA supports page fault handling through PRS. However, the DMA engine
> that's processing the descriptor is blocked until the PRS response is
> received. Other workqueues sharing the engine are also blocked.
> Page fault handling by the driver with PRS disabled can be used to
> mitigate the stalling.

Ah! I get your point now. Thanks for the explanation.

>
> With PRS disabled while ATS remains enabled, DSA handles page faults on
> a completion record by reporting an event in the event log. In this
> instance, the descriptor is completed and the event log contains the
> completion record address and the contents of the completion record."
>
> That's why IDXD driver needs this IOMMU's helper
> iommu_access_remote_vm() to copy the completion record from event log
> buffer to user space.
>
> Thanks.
>
> -Fenghua

Best regards,
baolu

2023-03-11 17:31:29

by Fenghua Yu

[permalink] [raw]
Subject: Re: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

Hi, Jean and Jacob,

On 3/7/23 08:33, Fenghua Yu wrote:
> Hi, Jean,
>
> On 3/7/23 00:40, Jean-Philippe Brucker wrote:
>> Hi Fenghua,
>>
>> On Mon, Mar 06, 2023 at 08:31:30AM -0800, Fenghua Yu wrote:
>>> Define and export iommu_access_remote_vm() to allow IOMMU related
>>> drivers to access user address space by PASID.
>>>
>>> The IDXD driver would like to use it to write the user's completion
>>> record that the hardware device is not able to write to due to user
>>> page fault.
>>>
>>> Without the API, it's complex for IDXD driver to copy completion record
>>> to a process' fault address for two reasons:
>>> 1. access_remote_vm() is not exported and shouldn't be exported for
>>>     drivers because drivers may easily cause mm reference issue.
>>> 2. user frees fault address pages to trigger fault by IDXD device.
>>>
>>> The driver has to call iommu_sva_find(), kthread_use_mm(), re-implement
>>> majority of access_remote_vm() etc to access remote vm.
>>>
>>> This IOMMU specific API hides these details and provides a clean
>>> interface
>>> for idxd driver and potentially other IOMMU related drivers.
>>>
>>> Suggested-by: Alistair Popple <[email protected]>
>>> Signed-off-by: Fenghua Yu <[email protected]>
>>> Cc: Joerg Roedel <[email protected]>
>>> Cc: Will Deacon <[email protected]>
>>> Cc: Robin Murphy <[email protected]>
>>> Cc: Alistair Popple <[email protected]>
>>> Cc: Lorenzo Stoakes <[email protected]>
>>> Cc: Christoph Hellwig <[email protected]>
>>> Cc: [email protected]
>>> ---
>>> v2:
>>> - Define and export iommu_access_remote_vm() for IDXD driver to write
>>>    completion record to user address space. This change removes
>>>    patch 8 and 9 in v1 (Alistair Popple)
>>>
>>>   drivers/iommu/iommu-sva.c | 35 +++++++++++++++++++++++++++++++++++
>>>   include/linux/iommu.h     |  9 +++++++++
>>>   2 files changed, 44 insertions(+)
>>>
>>> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
>>> index 24bf9b2b58aa..1d7a0aee58f7 100644
>>> --- a/drivers/iommu/iommu-sva.c
>>> +++ b/drivers/iommu/iommu-sva.c
>>> @@ -71,6 +71,41 @@ struct mm_struct *iommu_sva_find(ioasid_t pasid)
>>>   }
>>>   EXPORT_SYMBOL_GPL(iommu_sva_find);
>>> +/**
>>> + * iommu_access_remote_vm - access another process' address space by
>>> PASID
>>> + * @pasid:    Process Address Space ID assigned to the mm
>>> + * @addr:    start address to access
>>> + * @buf:    source or destination buffer
>>> + * @len:    number of bytes to transfer
>>> + * @gup_flags:    flags modifying lookup behaviour
>>> + *
>>> + * Another process' address space is found by PASID. A reference on @mm
>>> + * is taken and released inside the function.
>>> + *
>>> + * Return: number of bytes copied from source to destination.
>>> + */
>>> +int iommu_access_remote_vm(ioasid_t pasid, unsigned long addr, void
>>> *buf,
>>> +               int len, unsigned int gup_flags)
>>> +{
>>> +    struct mm_struct *mm;
>>> +    int copied;
>>> +
>>> +    mm = iommu_sva_find(pasid);
>>
>> The ability to find a mm by PASID is being removed, see
>> https://lore.kernel.org/linux-iommu/[email protected]/
>>
>>
>
> Thank you very much for pointing this out.
>
> I talked to Jacob just now. He will keep the iommu_sva_find() function
> in his next version because this patch is still using it. He agrees
> that I can still call iommu_sva_find() in this patch.

A further comment from Jason confirms that iommu_sva_find() will be
removed (https://lore.kernel.org/lkml/ZAjSsm4%[email protected]/).

So I cannot call iommu_sva_find() any more. I will maintain the mm and
find the mm from the PASID inside the IDXD driver, and will implement
access to the remote mm inside the IDXD driver, although the
implementation will duplicate code from access_remote_vm().

The next version will only change IDXD driver code. There won't be any
IOMMU code change.
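
For the curious, the in-driver version would look roughly like the sketch
below. It assumes the driver keeps its own pasid-to-mm mapping and the
caller holds a reference on the mm; the get_user_pages_remote() signature
varies across kernel versions, and the CR is assumed not to cross a page
boundary:

static int idxd_copy_cr_to_mm(struct mm_struct *mm, unsigned long addr,
			      void *cr, int len)
{
	struct page *page;
	void *kaddr;
	long ret;

	mmap_read_lock(mm);
	ret = get_user_pages_remote(mm, addr, 1, FOLL_WRITE, &page,
				    NULL, NULL);
	mmap_read_unlock(mm);
	if (ret != 1)
		return 0;

	kaddr = kmap_local_page(page);
	memcpy(kaddr + offset_in_page(addr), cr, len);
	kunmap_local(kaddr);

	set_page_dirty_lock(page);
	put_page(page);
	return len;
}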

Thanks.

-Fenghua

2023-03-20 13:35:33

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

On Tue, Mar 07, 2023 at 09:55:28AM -0800, Fenghua Yu wrote:
> >
> > I don't quite follow here. Isn't I/O page fault already supported?
>
> Patch 9 in this series explains in detail why the IDXD device
> cannot use a page fault to write to user memory: https://lore.kernel.org/dmaengine/[email protected]/
>
> "DSA supports page fault handling through PRS. However, the DMA engine
> that's processing the descriptor is blocked until the PRS response is
> received. Other workqueues sharing the engine are also blocked.
> Page fault handling by the driver with PRS disabled can be used to
> mitigate the stalling.
>
> With PRS disabled while ATS remains enabled, DSA handles page faults on
> a completion record by reporting an event in the event log. In this
> instance, the descriptor is completed and the event log contains the
> completion record address and the contents of the completion record."

This seems like a completely broken I/O model, and I don't think Linux
should support this model when it requires operations like
access_remote_vm.

2023-03-31 00:49:34

by Fenghua Yu

[permalink] [raw]
Subject: Re: [PATCH v2 08/16] iommu: define and export iommu_access_remote_vm()

Hi, Christoph,

On 3/20/23 06:35, Christoph Hellwig wrote:
> On Tue, Mar 07, 2023 at 09:55:28AM -0800, Fenghua Yu wrote:
>>>
>>> I don't quite follow here. Isn't I/O page fault already supported?
>>
>> Patch 9 in this series explains in detail why the IDXD device
>> cannot use a page fault to write to user memory: https://lore.kernel.org/dmaengine/[email protected]/
>>
>> "DSA supports page fault handling through PRS. However, the DMA engine
>> that's processing the descriptor is blocked until the PRS response is
>> received. Other workqueues sharing the engine are also blocked.
>> Page fault handling by the driver with PRS disabled can be used to
>> mitigate the stalling.
>>
>> With PRS disabled while ATS remains enabled, DSA handles page faults on
>> a completion record by reporting an event in the event log. In this
>> instance, the descriptor is completed and the event log contains the
>> completion record address and the contents of the completion record."
>
> This seems like a completely broken I/O model, and I don't think Linux
> should support this model when it requires operations like
> access_remote_vm.

This patch set has two parts:
1. Basic event log support and the PRS disabling knob.
2. The completion record page fault fixup part. The current patch is the
major patch in this part. It tries to fix up completion record page
faults from user space.

Since the fixup in part 2 is debatable and part 1 can be sent out
independently, how about we send out the parts separately?

In the new part 1, we simply warn on a completion record page fault and
don't try to fix it up. Usually a process executes the ENQCMD instruction
and then keeps polling the completion record periodically, so the
completion record will likely be backed by a page and won't generate a
page fault. If page faults are really an issue, the sysadmin can enable
PRS (which is enabled by default anyway) and let PRS handle the fault.

Then in the next step, we will send out a new part 2 to eliminate
completion record page faults. One proposal is to mmap() a kernel
allocated completion record area, so there won't be any completion record
page faults to fix up (and no access_remote_vm(), of course).
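
As a rough sketch of that mmap() idea (illustrative only; ctx->cr_page is
a hypothetical kernel page holding the CRs, and size/offset checks are
omitted for brevity):

static int idxd_cdev_mmap_cr(struct file *filp, struct vm_area_struct *vma)
{
	struct idxd_user_context *ctx = filp->private_data;

	/*
	 * Back the VMA with the kernel page that holds the completion
	 * records, so a device CR write can never hit an unpopulated
	 * user page.
	 */
	return vm_insert_page(vma, vma->vm_start,
			      virt_to_page(ctx->cr_page));
}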

Is this OK for you?

Thank you very much for your comment!

-Fenghua