2023-07-31 22:28:36

by Tom Zanussi

Subject: [PATCH v8 00/14] crypto: Add Intel Analytics Accelerator (IAA) crypto compression driver

Hi, this is v8 of the IAA crypto driver, incorporating feedback from
v7.

v8 changes:

- Rebased to current cryptodev tree.

- Added a generic-deflate software fallback for decompression in
cases where a hardware failure would otherwise prevent the data
from being recovered.

- Changed the driver_name code to use strim().

- Changed the null-destination cases to use sgl_alloc_order() rather
than sgl_alloc().

- Added more Reviewed-by tags.


v7 changes:

- Rebased to current cryptodev tree.

- Removed 'canned' compression mode (deflate-iaa-canned) and fixed
up dependencies in other patches and Documentation.

- Removed op_block checks.

- Removed a stray debugging #ifdef.

- Changed sysfs-driver-dma-idxd driver_name version to 6.6.0.


v6 changes:

- Rebased to current cryptodev tree.

- Changed code to create/register separate algorithms for each
compression mode - one for 'fixed' (deflate-iaa) and one for
'canned' (deflate-iaa-canned).

- Got rid of 'compression_mode' attribute and all the compression
mode code that deals with 'active' compression modes, since
there's no longer a single active compression mode.

- Use crypto_ctx to allow common compress/decompress code to
distinguish between the different compression modes. Also use it
to capture settings such as verify_compress, use_irq, etc. In
addition to being cleaner, this will allow for easier performance
comparisons using different modes/settings.

- Update Documentation and comments to reflect the changes.

- Fixed a bug found by Rex Zhang in decompress_header() which
unmapped src2 rather than src as it should have. Thanks, Rex!


v5 changes:

- Rebased to current cryptodev tree.

- Changed sysfs-driver-dma-idxd driver_name version to 6.5.0.

- Renamed wq private accessor functions to idxd_wq_set/get_private().

v4 changes:

- Added and used DRIVER_NAME_SIZE for wq driver_name.

- Changed all spaces to tabs in CRYPTO_DEV_IAA_CRYPTO_STATS config
menu.

- Removed the private_data void * from wq and replaced with
wq_confdev() instead, as suggested by Dave Jiang.

- Added more Reviewed-by tags.

v3 changes:

- Reworked the code to only allow the registered crypto alg to be
unregistered by removing the module. Also added an iaa_wq_get()
and iaa_wq_put() to take/give up a reference to the work queue
while there are compresses/decompresses in flight. This is
synchronized with the wq remove function, so that the
iaa_wq/iaa_devices can't go away beneath active operations. This
was tested by removing/disabling the iaa wqs/devices while
operations were in flight.

- Simplified the rebalance code and removed cpu_to_iaa() function
since it was overly complicated and wasn't actually working as
advertised.

- As a result of reworking the above code, fixed several bugs such
as possibly unregistering an unregistered crypto alg, a memory
leak where iaa_wqs weren't being freed, and making sure the
compression schemes were registered before registering the driver.

- Added set_/idxd_wq_private() accessors for wq private data.

- Added missing EXPORT_SYMBOL_NS_GPL() to [PATCH 04/15] dmaengine:
idxd: Export descriptor management functions

- Added Dave's Reviewed-by: tags from v2.

- Updated Documentation and commit messages to reflect the changes
above.

- Rebased to the cryptodev tree, since that has the earlier changes
that moved the intel drivers to crypto/intel.

v2 changes:

- Removed legacy interface and all related code; merged async
interface into main deflate patch.

- Added support for the null destination case. Thanks to Giovanni
Cabiddu for making me aware of this as well as the selftests for
it.

- Had to do some rearrangement of the code in order to pass all the
selftests. Also added a testcase for 'canned'.

- Moved the iaa crypto driver to drivers/crypto/intel, and moved all
the other intel drivers there as well (which will be posted as a
separate series immediately following this one).

- Added an iaa crypto section to MAINTAINERS.

- Updated the documentation and commit messages to reflect the removal
of the legacy interface.

- Changed kernel version from 6.3.0 to 6.4.0 in patch 01/15 (wq
driver name support)

v1:

This series adds Linux crypto algorithm support for Intel® In-memory
Analytics Accelerator (Intel IAA) [1] hardware compression and
decompression, which is available on Sapphire Rapids systems.

The IAA crypto support is implemented as an IDXD sub-driver. The IDXD
driver already present in the kernel provides discovery and management
of the IAA devices on a system, as well as all the functionality
needed to manage, submit, and wait for completion of work executed on
them. The first 7 patches (patches starting with dmaengine:) add
small bits of underlying IDXD plumbing needed to allow external
sub-drivers to take advantage of this support and claim ownership of
specific IAA devices and workqueues.

The remaining patches add the main support for this feature via the
crypto API, making it transparently accessible to kernel features that
can make use of it such as zswap and zram (patches starting with
crypto: iaa -).

These include both sync/async support for the deflate algorithm
implemented by the IAA hardware, as well as an additional option for
driver statistics and Documentation.

Patch 8 ('[PATCH 08/15] crypto: iaa - Add IAA Compression Accelerator
Documentation') describes the IAA crypto driver in detail; what
follows is just a high-level synopsis meant to aid the discussion
below.

The IAA hardware is fairly complex and generally requires a
knowledgeable administrator with sufficiently detailed understanding
of the hardware to set it up before it can be used. As mentioned in
the Documentation, this typically requires using a special tool called
accel-config to enumerate and configure IAA workqueues, engines, etc.,
although this can also be done using only sysfs files.

The operation of the driver mirrors this requirement and only allows
the hardware to be accessed via the crypto layer once the hardware has
been configured and bound to the IAA crypto driver. As an IDXD
sub-driver, the IAA crypto driver essentially takes ownership of the
hardware until the administrator explicitly gives it up. This happens
when the administrator enables the first IAA workqueue or disables the
last one: the iaa_crypto (sync and async) algorithms are registered
when the first workqueue is enabled, and deregistered when the last
one is disabled.

The normal sequence of operations would be:

< configure the hardware using accel-config or sysfs >

< configure the iaa crypto driver (see below) >

< configure the subsystem e.g. zswap/zram to use the iaa_crypto algo >

< run the workload >

There are a small number of iaa_crypto driver attributes that the
administrator can configure, and which also need to be configured
before the algorithm is enabled:

compression_mode:

The IAA crypto driver supports an extensible interface supporting
any number of different compression modes that can be tailored to
specific types of workloads. These are implemented as tables and
given arbitrary names describing their intent.

There are currently only 2 compression modes, “canned” and “fixed”.
In order to set a compression mode, echo the mode’s name to the
compression_mode driver attribute:

echo "canned" > /sys/bus/dsa/drivers/crypto/compression_mode

There are a few other available iaa_crypto driver attributes (see
Documentation for details) but the main one we want to consider in
detail for now is the ‘sync_mode’ attribute.

The ‘sync_mode’ attribute has 3 possible settings: ‘sync’, ‘async’,
and ‘async_irq’.

The context for these different modes is that although the iaa_crypto
driver implements the asynchronous crypto interface, the async
interface is currently only used in a synchronous way by facilities
like zswap that make use of it.

This is fine for software compress/decompress algorithms, since
there’s no real benefit in being able to use a truly asynchronous
interface with them. This isn’t the case, though, for hardware
compress/decompress engines such as IAA, where truly asynchronous
behavior is beneficial if not completely necessary to make optimal use
of the hardware.

The IAA crypto driver ‘sync_mode’ support should allow facilities such
as zswap to ‘support async (de)compression in some way [2]’ once
they are modified to actually make use of it.

When the ‘async_irq’ sync_mode is specified, the driver sets the bits
in the IAA work descriptor to generate an irq when the work completes.
So for every compression or decompression, the IAA acomp_alg
implementations called by crypto_acomp_compress/decompress() simply
set up the descriptor, turn on the 'request irq' bit and return
immediately with -EINPROGRESS. When the work completes, the irq fires
and the IDXD driver’s irq thread for that irq invokes the callback the
iaa_crypto module registered with IDXD. When the irq thread gets
scheduled, it wakes up the caller (for instance zswap) waiting
synchronously via crypto_wait_req().

Using the simple madvise test program in '[PATCH 08/15] crypto: iaa -
Add IAA Compression Accelerator Documentation' along with a set of
pages from the spec17 benchmark and tracepoint instrumentation
measuring the time taken between the start and end of each compress
and decompress, this case, async_irq, takes on average 6,847 ns for
compression and 5,840 ns for decompression. (See Table 1 below for a
summary of all the tests.)

When sync_mode is set to ‘sync’, the interrupt bit is not set and the
work descriptor is submitted in the same way it was for the previous
case. In this case the call doesn’t return but rather loops around
waiting in the iaa_crypto driver’s check_completion() function which
continually checks the descriptor’s completion bit until it finds it
set to ‘completed’. It then returns to the caller, again for example
zswap waiting in crypto_wait_req(). From the standpoint of zswap,
this case is exactly the same as the previous case; the difference is
seen only in the crypto layer and the iaa_crypto driver internally,
and from its standpoint they’re both synchronous calls. There is, however,
a large performance difference: an average of 3,177 ns for compress
and 2,235 ns for decompress.

The final sync_mode is ‘async’. In this case, too, the interrupt bit
is not set and the work descriptor is submitted, returning immediately
to the caller with -EINPROGRESS. Because there’s no interrupt set to
notify anyone when the work completes, the caller needs to somehow
check for work completion. Because core code like zswap can’t do this
directly, for example by calling iaa_crypto’s check_completion(), there
would need to be some changes made to code like zswap and the crypto
layer in order to take advantage of this mode. As such, there are no
numbers to share for this mode.

Finally, just a quick discussion of the remaining numbers in Table 1,
those comparing the iaa_crypto sync and async irq cases to software
deflate. Software deflate took average of 108,978 ns for compress and
14,485 ns for decompress.

As can be seen from Table 1, the numbers using the iaa_crypto driver
for deflate as compared to software are so much better that merging it
would seem to make sense on its own merits. The 'async' sync_mode
described above, however, offers the possibility of even greater gains
to be had against higher-performing algorithms such as lzo, via
parallelization, once the calling facilities are modified to take
advantage of it. Follow-up patchsets to this one will demonstrate
concretely how that might be accomplished.

Thanks,

Tom


Table 1. Zswap latency and compression numbers (in ns):

Algorithm compress decompress
----------------------------------------------------------
iaa sync 3,177 2,235
iaa async irq 6,847 5,840
software deflate 108,978 14,485

[1] https://cdrdv2.intel.com/v1/dl/getContent/721858

[2] https://lore.kernel.org/lkml/[email protected]/


Dave Jiang (2):
dmaengine: idxd: add wq driver name support for accel-config user tool
dmaengine: idxd: add external module driver support for dsa_bus_type

Tom Zanussi (12):
dmaengine: idxd: Export drv_enable/disable and related functions
dmaengine: idxd: Export descriptor management functions
dmaengine: idxd: Export wq resource management functions
dmaengine: idxd: Add wq private data accessors
dmaengine: idxd: add callback support for iaa crypto
crypto: iaa - Add IAA Compression Accelerator Documentation
crypto: iaa - Add Intel IAA Compression Accelerator crypto driver core
crypto: iaa - Add per-cpu workqueue table with rebalancing
crypto: iaa - Add compression mode management along with fixed mode
crypto: iaa - Add support for deflate-iaa compression algorithm
crypto: iaa - Add irq support for the crypto async interface
crypto: iaa - Add IAA Compression Accelerator stats

.../ABI/stable/sysfs-driver-dma-idxd | 6 +
.../driver-api/crypto/iaa/iaa-crypto.rst | 645 +++++
Documentation/driver-api/crypto/iaa/index.rst | 20 +
Documentation/driver-api/crypto/index.rst | 20 +
Documentation/driver-api/index.rst | 1 +
MAINTAINERS | 7 +
crypto/testmgr.c | 10 +
drivers/crypto/intel/Kconfig | 1 +
drivers/crypto/intel/Makefile | 1 +
drivers/crypto/intel/iaa/Kconfig | 19 +
drivers/crypto/intel/iaa/Makefile | 12 +
drivers/crypto/intel/iaa/iaa_crypto.h | 182 ++
.../crypto/intel/iaa/iaa_crypto_comp_fixed.c | 92 +
drivers/crypto/intel/iaa/iaa_crypto_main.c | 2142 +++++++++++++++++
drivers/crypto/intel/iaa/iaa_crypto_stats.c | 281 +++
drivers/crypto/intel/iaa/iaa_crypto_stats.h | 60 +
drivers/dma/idxd/bus.c | 6 +
drivers/dma/idxd/cdev.c | 7 +
drivers/dma/idxd/device.c | 9 +-
drivers/dma/idxd/dma.c | 9 +-
drivers/dma/idxd/idxd.h | 84 +-
drivers/dma/idxd/irq.c | 12 +-
drivers/dma/idxd/submit.c | 9 +-
drivers/dma/idxd/sysfs.c | 34 +
include/uapi/linux/idxd.h | 1 +
25 files changed, 3650 insertions(+), 20 deletions(-)
create mode 100644 Documentation/driver-api/crypto/iaa/iaa-crypto.rst
create mode 100644 Documentation/driver-api/crypto/iaa/index.rst
create mode 100644 Documentation/driver-api/crypto/index.rst
create mode 100644 drivers/crypto/intel/iaa/Kconfig
create mode 100644 drivers/crypto/intel/iaa/Makefile
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto.h
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_comp_fixed.c
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_main.c
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_stats.c
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_stats.h

--
2.34.1



2023-07-31 23:17:17

by Tom Zanussi

Subject: [PATCH v8 13/14] crypto: iaa - Add irq support for the crypto async interface

The existing iaa crypto async support provides an implementation that
satisfies the interface but does so in a synchronous manner - it fills
and submits the IDXD descriptor and then waits for it to complete
before returning. This isn't a problem at the moment, since all
existing callers (e.g. zswap) wrap any asynchronous callees in a
synchronous wrapper anyway.

This change makes the iaa crypto async implementation truly
asynchronous: it fills and submits the IDXD descriptor, then returns
immediately with -EINPROGRESS. It also sets the descriptor's 'request
completion irq' bit and sets up a callback with the IDXD driver which
is called when the operation completes and the irq fires. The
existing callers such as zswap use synchronous wrappers to deal with
-EINPROGRESS and so work as expected without any changes.

This mode can be enabled by writing 'async_irq' to the sync_mode
iaa_crypto driver attribute:

echo async_irq > /sys/bus/dsa/drivers/crypto/sync_mode

Async mode without interrupts (caller must poll) can be enabled by
writing 'async' to it:

echo async > /sys/bus/dsa/drivers/crypto/sync_mode

The default sync mode can be enabled by writing 'sync' to it:

echo sync > /sys/bus/dsa/drivers/crypto/sync_mode

The sync_mode value setting at the time the IAA algorithms are
registered is captured in each algorithm's crypto_ctx and used for all
compresses and decompresses when using a given algorithm.

Signed-off-by: Tom Zanussi <[email protected]>
---
drivers/crypto/intel/iaa/iaa_crypto.h | 2 +
drivers/crypto/intel/iaa/iaa_crypto_main.c | 250 ++++++++++++++++++++-
2 files changed, 250 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 4c6b0f5a6b50..de014ac53adb 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -153,6 +153,8 @@ enum iaa_mode {
struct iaa_compression_ctx {
enum iaa_mode mode;
bool verify_compress;
+ bool async_mode;
+ bool use_irq;
};

#endif
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 75c4846cb4ff..17568c9440c2 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -119,6 +119,102 @@ static ssize_t verify_compress_store(struct device_driver *driver,
}
static DRIVER_ATTR_RW(verify_compress);

+/*
+ * The iaa crypto driver supports three 'sync' methods determining how
+ * compressions and decompressions are performed:
+ *
+ * - sync: the compression or decompression completes before
+ * returning. This is the mode used by the async crypto
+ * interface when the sync mode is set to 'sync' and by
+ * the sync crypto interface regardless of setting.
+ *
+ * - async: the compression or decompression is submitted and returns
+ * immediately. Completion interrupts are not used so
+ * the caller is responsible for polling the descriptor
+ * for completion. This mode is applicable to only the
+ * async crypto interface and is ignored for anything
+ * else.
+ *
+ * - async_irq: the compression or decompression is submitted and
+ * returns immediately. Completion interrupts are
+ * enabled so the caller can wait for the completion and
+ * yield to other threads. When the compression or
+ * decompression completes, the completion is signaled
+ * and the caller awakened. This mode is applicable to
+ * only the async crypto interface and is ignored for
+ * anything else.
+ *
+ * These modes can be set using the iaa_crypto sync_mode driver
+ * attribute.
+ */
+
+/* Use async mode */
+static bool async_mode;
+/* Use interrupts */
+static bool use_irq;
+
+/**
+ * set_iaa_sync_mode - Set IAA sync mode
+ * @name: The name of the sync mode
+ *
+ * Make the IAA sync mode named @name the current sync mode used by
+ * compression/decompression.
+ */
+
+static int set_iaa_sync_mode(const char *name)
+{
+ int ret = 0;
+
+ if (sysfs_streq(name, "sync")) {
+ async_mode = false;
+ use_irq = false;
+ } else if (sysfs_streq(name, "async")) {
+ async_mode = true;
+ use_irq = false;
+ } else if (sysfs_streq(name, "async_irq")) {
+ async_mode = true;
+ use_irq = true;
+ } else {
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static ssize_t sync_mode_show(struct device_driver *driver, char *buf)
+{
+ int ret = 0;
+
+ if (!async_mode && !use_irq)
+ ret = sprintf(buf, "%s\n", "sync");
+ else if (async_mode && !use_irq)
+ ret = sprintf(buf, "%s\n", "async");
+ else if (async_mode && use_irq)
+ ret = sprintf(buf, "%s\n", "async_irq");
+
+ return ret;
+}
+
+static ssize_t sync_mode_store(struct device_driver *driver,
+ const char *buf, size_t count)
+{
+ int ret = -EBUSY;
+
+ mutex_lock(&iaa_devices_lock);
+
+ if (iaa_crypto_enabled)
+ goto out;
+
+ ret = set_iaa_sync_mode(buf);
+ if (ret == 0)
+ ret = count;
+out:
+ mutex_unlock(&iaa_devices_lock);
+
+ return ret;
+}
+static DRIVER_ATTR_RW(sync_mode);
+
static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX];

static int find_empty_iaa_compression_mode(void)
@@ -997,6 +1093,95 @@ static int deflate_generic_decompress(struct acomp_req *req)
return ret;
}

+static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
+ struct idxd_wq *wq,
+ dma_addr_t src_addr, unsigned int slen,
+ dma_addr_t dst_addr, unsigned int *dlen,
+ u32 compression_crc);
+
+static void iaa_desc_complete(struct idxd_desc *idxd_desc,
+ enum idxd_complete_type comp_type,
+ bool free_desc, void *__ctx,
+ u32 *status)
+{
+ struct iaa_device_compression_mode *active_compression_mode;
+ struct iaa_compression_ctx *compression_ctx;
+ struct crypto_ctx *ctx = __ctx;
+ struct iaa_device *iaa_device;
+ struct idxd_device *idxd;
+ struct iaa_wq *iaa_wq;
+ struct pci_dev *pdev;
+ struct device *dev;
+ int ret, err = 0;
+
+ compression_ctx = crypto_tfm_ctx(ctx->tfm);
+
+ iaa_wq = idxd_wq_get_private(idxd_desc->wq);
+ iaa_device = iaa_wq->iaa_device;
+ idxd = iaa_device->idxd;
+ pdev = idxd->pdev;
+ dev = &pdev->dev;
+
+ active_compression_mode = get_iaa_device_compression_mode(iaa_device,
+ compression_ctx->mode);
+ dev_dbg(dev, "%s: compression mode %s,"
+ " ctx->src_addr %llx, ctx->dst_addr %llx\n", __func__,
+ active_compression_mode->name,
+ ctx->src_addr, ctx->dst_addr);
+
+ ret = check_completion(dev, idxd_desc->iax_completion,
+ ctx->compress, false);
+ if (ret) {
+ dev_dbg(dev, "%s: check_completion failed ret=%d\n", __func__, ret);
+ if (ctx->compress == false &&
+ idxd_desc->iax_completion->status == IAA_ANALYTICS_ERROR) {
+ pr_warn("%s: falling back to deflate-generic decompress, "
+ "analytics error code %x\n", __func__,
+ idxd_desc->iax_completion->error_code);
+ ret = deflate_generic_decompress(ctx->req);
+ if (ret) {
+ dev_dbg(dev, "%s: deflate-generic failed ret=%d\n",
+ __func__, ret);
+ err = -EIO;
+ goto err;
+ }
+ } else {
+ err = -EIO;
+ goto err;
+ }
+ } else {
+ ctx->req->dlen = idxd_desc->iax_completion->output_size;
+ }
+
+ if (ctx->compress && compression_ctx->verify_compress) {
+ u32 compression_crc;
+
+ compression_crc = idxd_desc->iax_completion->crc;
+ dma_sync_sg_for_device(dev, ctx->req->dst, 1, DMA_FROM_DEVICE);
+ dma_sync_sg_for_device(dev, ctx->req->src, 1, DMA_TO_DEVICE);
+ ret = iaa_compress_verify(ctx->tfm, ctx->req, iaa_wq->wq, ctx->src_addr,
+ ctx->req->slen, ctx->dst_addr, &ctx->req->dlen,
+ compression_crc);
+ if (ret) {
+ dev_dbg(dev, "%s: compress verify failed ret=%d\n", __func__, ret);
+ err = -EIO;
+ }
+ }
+err:
+ if (ctx->req->base.complete)
+ acomp_request_complete(ctx->req, err);
+
+ dma_unmap_sg(dev, ctx->req->dst, sg_nents(ctx->req->dst), DMA_FROM_DEVICE);
+ dma_unmap_sg(dev, ctx->req->src, sg_nents(ctx->req->src), DMA_TO_DEVICE);
+
+ if (ret != 0)
+ dev_dbg(dev, "asynchronous compress failed ret=%d\n", ret);
+
+ if (free_desc)
+ idxd_free_desc(idxd_desc->wq, idxd_desc);
+ iaa_wq_put(idxd_desc->wq);
+}
+
static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
struct idxd_wq *wq,
dma_addr_t src_addr, unsigned int slen,
@@ -1045,6 +1230,22 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
desc->src2_size = sizeof(struct aecs_comp_table_record);
desc->completion_addr = idxd_desc->compl_dma;

+ if (ctx->use_irq) {
+ desc->flags |= IDXD_OP_FLAG_RCI;
+
+ idxd_desc->crypto.req = req;
+ idxd_desc->crypto.tfm = tfm;
+ idxd_desc->crypto.src_addr = src_addr;
+ idxd_desc->crypto.dst_addr = dst_addr;
+ idxd_desc->crypto.compress = true;
+
+ dev_dbg(dev, "%s use_async_irq: compression mode %s,"
+ " src_addr %llx, dst_addr %llx\n", __func__,
+ active_compression_mode->name,
+ src_addr, dst_addr);
+ } else if (ctx->async_mode && !disable_async)
+ req->base.data = idxd_desc;
+
dev_dbg(dev, "%s: compression mode %s,"
" desc->src1_addr %llx, desc->src1_size %d,"
" desc->dst_addr %llx, desc->max_dst_size %d,"
@@ -1059,6 +1260,12 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
goto err;
}

+ if (ctx->async_mode && !disable_async) {
+ ret = -EINPROGRESS;
+ dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
+ goto out;
+ }
+
ret = check_completion(dev, idxd_desc->iax_completion, true, false);
if (ret) {
dev_dbg(dev, "check_completion failed ret=%d\n", ret);
@@ -1069,7 +1276,8 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,

*compression_crc = idxd_desc->iax_completion->crc;

- idxd_free_desc(wq, idxd_desc);
+ if (!ctx->async_mode)
+ idxd_free_desc(wq, idxd_desc);
out:
return ret;
err:
@@ -1212,6 +1420,22 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
desc->src1_size = slen;
desc->completion_addr = idxd_desc->compl_dma;

+ if (ctx->use_irq && !disable_async) {
+ desc->flags |= IDXD_OP_FLAG_RCI;
+
+ idxd_desc->crypto.req = req;
+ idxd_desc->crypto.tfm = tfm;
+ idxd_desc->crypto.src_addr = src_addr;
+ idxd_desc->crypto.dst_addr = dst_addr;
+ idxd_desc->crypto.compress = false;
+
+ dev_dbg(dev, "%s: use_async_irq compression mode %s,"
+ " src_addr %llx, dst_addr %llx\n", __func__,
+ active_compression_mode->name,
+ src_addr, dst_addr);
+ } else if (ctx->async_mode && !disable_async)
+ req->base.data = idxd_desc;
+
dev_dbg(dev, "%s: decompression mode %s,"
" desc->src1_addr %llx, desc->src1_size %d,"
" desc->dst_addr %llx, desc->max_dst_size %d,"
@@ -1226,6 +1450,12 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
goto err;
}

+ if (ctx->async_mode && !disable_async) {
+ ret = -EINPROGRESS;
+ dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
+ goto out;
+ }
+
ret = check_completion(dev, idxd_desc->iax_completion, false, false);
if (ret) {
dev_dbg(dev, "%s: check_completion failed ret=%d\n", __func__, ret);
@@ -1248,7 +1478,8 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,

*dlen = req->dlen;

- idxd_free_desc(wq, idxd_desc);
+ if (!ctx->async_mode)
+ idxd_free_desc(wq, idxd_desc);
out:
return ret;
err:
@@ -1546,6 +1777,8 @@ static int iaa_comp_adecompress(struct acomp_req *req)
static void compression_ctx_init(struct iaa_compression_ctx *ctx)
{
ctx->verify_compress = iaa_verify_compress;
+ ctx->async_mode = async_mode;
+ ctx->use_irq = use_irq;
}

static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
@@ -1753,6 +1986,7 @@ static struct idxd_device_driver iaa_crypto_driver = {
.remove = iaa_crypto_remove,
.name = IDXD_SUBDRIVER_NAME,
.type = dev_types,
+ .desc_complete = iaa_desc_complete,
};

static int __init iaa_crypto_init_module(void)
@@ -1791,10 +2025,20 @@ static int __init iaa_crypto_init_module(void)
goto err_verify_attr_create;
}

+ ret = driver_create_file(&iaa_crypto_driver.drv,
+ &driver_attr_sync_mode);
+ if (ret) {
+ pr_debug("IAA sync mode attr creation failed\n");
+ goto err_sync_attr_create;
+ }
+
pr_debug("initialized\n");
out:
return ret;

+err_sync_attr_create:
+ driver_remove_file(&iaa_crypto_driver.drv,
+ &driver_attr_verify_compress);
err_verify_attr_create:
idxd_driver_unregister(&iaa_crypto_driver);
err_driver_reg:
@@ -1810,6 +2054,8 @@ static void __exit iaa_crypto_cleanup_module(void)
if (iaa_unregister_compression_device())
pr_debug("IAA compression device unregister failed\n");

+ driver_remove_file(&iaa_crypto_driver.drv,
+ &driver_attr_sync_mode);
driver_remove_file(&iaa_crypto_driver.drv,
&driver_attr_verify_compress);
idxd_driver_unregister(&iaa_crypto_driver);
--
2.34.1