2023-07-10 19:09:12

by Tom Zanussi

Subject: [PATCH v7 00/14] crypto: Add Intel Analytics Accelerator (IAA) crypto compression driver

Hi, this is v7 of the IAA crypto driver, incorporating feedback from
v6.

v7 changes:

- Rebased to current cryptodev tree.

- Removed 'canned' compression mode (deflate-iaa-canned) and fixed
up dependencies in other patches and Documentation.

- Removed op_block checks.

- Removed a stray debugging #ifdef.

- Changed sysfs-driver-dma-idxd driver_name version to 6.6.0.


v6 changes:

- Rebased to current cryptodev tree.

- Changed code to create/register separate algorithms for each
compression mode - one for 'fixed' (deflate-iaa) and one for
'canned' (deflate-iaa-canned).

- Got rid of 'compression_mode' attribute and all the compression
mode code that deals with 'active' compression modes, since
there's no longer a single active compression mode.

- Use crypto_ctx to allow common compress/decompress code to
distinguish between the different compression modes. Also use it
to capture settings such as verify_compress, use_irq, etc. In
addition to being cleaner, this will allow for easier performance
comparisons using different modes/settings.

- Update Documentation and comments to reflect the changes.

- Fixed a bug found by Rex Zhang in decompress_header() which
unmapped src2 rather than src as it should have. Thanks, Rex!


v5 changes:

- Rebased to current cryptodev tree.

- Changed sysfs-driver-dma-idxd driver_name version to 6.5.0.

- Renamed wq private accessor functions to idxd_wq_set/get_private().

v4 changes:

- Added and used DRIVER_NAME_SIZE for wq driver_name.

- Changed all spaces to tabs in CRYPTO_DEV_IAA_CRYPTO_STATS config
menu.

- Removed the private_data void * from wq and replaced with
wq_confdev() instead, as suggested by Dave Jiang.

- Added more Reviewed-by tags.

v3 changes:

- Reworked the code to only allow the registered crypto alg to be
unregistered by removing the module. Also added an iaa_wq_get()
and iaa_wq_put() to take/give up a reference to the work queue
while there are compresses/decompresses in flight. This is
synchronized with the wq remove function, so that the
iaa_wq/iaa_devices can't go away beneath active operations. This
was tested by removing/disabling the iaa wqs/devices while
operations were in flight.

- Simplified the rebalance code and removed cpu_to_iaa() function
since it was overly complicated and wasn't actually working as
advertised.

- As a result of reworking the above code, fixed several bugs such
as possibly unregistering an unregistered crypto alg, a memory
leak where iaa_wqs weren't being freed, and making sure the
compression schemes were registered before registering the driver.

- Added set_/idxd_wq_private() accessors for wq private data.

- Added missing EXPORT_SYMBOL_NS_GPL() to [PATCH 04/15] dmaengine:
idxd: Export descriptor management functions.

- Added Dave's Reviewed-by: tags from v2.

- Updated Documentation and commit messages to reflect the changes
above.

- Rebased to the cryptodev tree, since that has the earlier changes
that moved the intel drivers to crypto/intel.

v2 changes:

- Removed legacy interface and all related code; merged async
interface into main deflate patch.

- Added support for the null destination case. Thanks to Giovanni
Cabiddu for making me aware of this as well as the selftests for
it.

- Had to do some rearrangement of the code in order to pass all the
selftests. Also added a testcase for 'canned'.

- Moved the iaa crypto driver to drivers/crypto/intel, and moved all
the other intel drivers there as well (which will be posted as a
separate series immediately following this one).

- Added an iaa crypto section to MAINTAINERS.

- Updated the documentation and commit messages to reflect the removal
of the legacy interface.

- Changed kernel version from 6.3.0 to 6.4.0 in patch 01/15 (wq
driver name support)

v1:

This series adds Linux crypto algorithm support for Intel® In-memory
Analytics Accelerator (Intel IAA) [1] hardware compression and
decompression, which is available on Sapphire Rapids systems.

The IAA crypto support is implemented as an IDXD sub-driver. The IDXD
driver already present in the kernel provides discovery and management
of the IAA devices on a system, as well as all the functionality
needed to manage, submit, and wait for completion of work executed on
them. The first 7 patches (patches starting with dmaengine:) add
small bits of underlying IDXD plumbing needed to allow external
sub-drivers to take advantage of this support and claim ownership of
specific IAA devices and workqueues.

The remaining patches add the main support for this feature via the
crypto API, making it transparently accessible to kernel features that
can make use of it, such as zswap and zram (patches starting with
crypto: iaa -).

These include both sync and async support for the deflate algorithm
implemented by the IAA hardware, as well as optional driver statistics
and Documentation.

Patch 8 ('[PATCH 08/15] crypto: iaa - Add IAA Compression Accelerator
Documentation') describes the IAA crypto driver in detail; what
follows here is just a high-level synopsis meant to aid the
discussion.

The IAA hardware is fairly complex and generally requires a
knowledgeable administrator with sufficiently detailed understanding
of the hardware to set it up before it can be used. As mentioned in
the Documentation, this typically requires using a special tool called
accel-config to enumerate and configure IAA workqueues, engines, etc.,
although this can also be done using only sysfs files.

The operation of the driver mirrors this requirement and only allows
the hardware to be accessed via the crypto layer once the hardware has
been configured and bound to the IAA crypto driver. As an IDXD
sub-driver, the IAA crypto driver essentially takes ownership of the
hardware until it is explicitly given up by the administrator. Taking
and giving up ownership happen automatically as the administrator
enables the first IAA workqueue or disables the last one: the
iaa_crypto (sync and async) algorithms are registered when the first
workqueue is enabled, and deregistered when the last one is disabled.

The normal sequence of operations would be:

< configure the hardware using accel-config or sysfs >

< configure the iaa crypto driver (see below) >

< configure the subsystem e.g. zswap/zram to use the iaa_crypto algo >

< run the workload >

There are a small number of iaa_crypto driver attributes that the
administrator can configure, and which also need to be configured
before the algorithm is enabled:

compression_mode:

The IAA crypto driver supports an extensible interface supporting
any number of different compression modes that can be tailored to
specific types of workloads. These are implemented as tables and
given arbitrary names describing their intent.

There are currently only 2 compression modes, “canned” and “fixed”.
In order to set a compression mode, echo the mode’s name to the
compression_mode driver attribute:

echo "canned" > /sys/bus/dsa/drivers/crypto/compression_mode

There are a few other available iaa_crypto driver attributes (see
Documentation for details) but the main one we want to consider in
detail for now is the ‘sync_mode’ attribute.

The ‘sync_mode’ attribute has 3 possible settings: ‘sync’, ‘async’,
and ‘async_irq’.

The context for these different modes is that although the iaa_crypto
driver implements the asynchronous crypto interface, that interface is
currently only used in a synchronous way by the facilities, such as
zswap, that make use of it.

This is fine for software compress/decompress algorithms, since
there’s no real benefit in being able to use a truly asynchronous
interface with them. This isn’t the case, though, for hardware
compress/decompress engines such as IAA, where truly asynchronous
behavior is beneficial if not completely necessary to make optimal use
of the hardware.

The IAA crypto driver ‘sync_mode’ support should allow facilities such
as zswap to ‘support async (de)compression in some way [2]’ once
they are modified to actually make use of it.

When the ‘async_irq’ sync_mode is specified, the driver sets the bits
in the IAA work descriptor to generate an irq when the work completes.
So for every compression or decompression, the IAA acomp_alg
implementations called by crypto_acomp_compress/decompress() simply
set up the descriptor, turn on the 'request irq' bit, and return
immediately with -EINPROGRESS. When the work completes, the irq fires
and the IDXD driver’s irq thread for that irq invokes the callback the
iaa_crypto module registered with IDXD. When the irq thread gets
scheduled, it wakes up the caller - for instance zswap - which is
waiting synchronously via crypto_wait_req().
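
As a concrete illustration of what that synchronous wrapping looks
like from the caller's side, here is a rough sketch (not code taken
from zswap itself; the function name and buffer handling are purely
illustrative, and the algorithm name matches the v7 'fixed' mode,
deflate-iaa) of a caller using the async acomp interface with
crypto_wait_req(), which turns the driver's -EINPROGRESS return into a
sleep until the completion callback fires:

  #include <crypto/acompress.h>
  #include <linux/scatterlist.h>

  /* Illustrative only: compress slen bytes at src into dst/dlen,
   * waiting synchronously for the async algorithm to finish. */
  static int example_iaa_compress(void *src, unsigned int slen,
                                  void *dst, unsigned int *dlen)
  {
          struct crypto_acomp *tfm;
          struct acomp_req *req;
          struct scatterlist sg_src, sg_dst;
          DECLARE_CRYPTO_WAIT(wait);
          int ret;

          tfm = crypto_alloc_acomp("deflate-iaa", 0, 0);
          if (IS_ERR(tfm))
                  return PTR_ERR(tfm);

          req = acomp_request_alloc(tfm);
          if (!req) {
                  crypto_free_acomp(tfm);
                  return -ENOMEM;
          }

          sg_init_one(&sg_src, src, slen);
          sg_init_one(&sg_dst, dst, *dlen);
          acomp_request_set_params(req, &sg_src, &sg_dst, slen, *dlen);
          acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                     crypto_req_done, &wait);

          /* -EINPROGRESS from the driver becomes a sleep here */
          ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
          if (!ret)
                  *dlen = req->dlen;

          acomp_request_free(req);
          crypto_free_acomp(tfm);
          return ret;
  }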

Using the simple madvise test program in '[PATCH 08/15] crypto: iaa -
Add IAA Compression Accelerator Documentation' along with a set of
pages from the spec17 benchmark and tracepoint instrumentation
measuring the time taken between the start and end of each compress
and decompress, this case, async_irq, takes on average 6,847 ns for
compression and 5,840 ns for decompression. (See Table 1 below for a
summary of all the tests.)

When sync_mode is set to ‘sync’, the interrupt bit is not set and the
work descriptor is submitted in the same way as in the previous case.
Here, however, the call doesn’t return immediately but instead loops
in the iaa_crypto driver’s check_completion() function, which
continually checks the descriptor’s completion bit until it finds it
set to ‘completed’. It then returns to the caller - again, for
example, zswap waiting in crypto_wait_req(). From the standpoint of
zswap this case is exactly the same as the previous one - both are
synchronous calls - with the difference visible only internally, in
the crypto layer and the iaa_crypto driver. There is however a large
performance difference: an average of 3,177 ns for compression and
2,235 ns for decompression.

The final sync_mode is ‘async’. In this case, too, the interrupt bit
is not set; the work descriptor is submitted and the call returns
immediately to the caller with -EINPROGRESS. Because no interrupt is
set up to signal when the work completes, the caller needs to check
for completion itself. Since core code like zswap can’t do this
directly - for example by calling iaa_crypto’s check_completion() -
code like zswap and the crypto layer would need to be changed in order
to take advantage of this mode. As such, there are no numbers to share
for this mode.

Finally, just a quick discussion of the remaining numbers in Table 1,
those comparing the iaa_crypto sync and async irq cases to software
deflate. Software deflate took an average of 108,978 ns for
compression and 14,485 ns for decompression.

As can be seen from Table 1, the numbers using the iaa_crypto driver
for deflate as compared to software are so much better that merging it
would seem to make sense on its own merits. The 'async' sync_mode
described above, however, offers the possibility of even greater gains
to be had against higher-performing algorithms such as lzo, via
parallelization, once the calling facilities are modified to take
advantage of it. Follow-up patchsets to this one will demonstrate
concretely how that might be accomplished.

Thanks,

Tom


Table 1. Zswap latency and compression numbers (in ns):

Algorithm              compress    decompress
----------------------------------------------
iaa sync                  3,177         2,235
iaa async irq             6,847         5,840
software deflate        108,978        14,485

[1] https://cdrdv2.intel.com/v1/dl/getContent/721858

[2] https://lore.kernel.org/lkml/[email protected]/


Dave Jiang (2):
dmaengine: idxd: add wq driver name support for accel-config user tool
dmaengine: idxd: add external module driver support for dsa_bus_type

Tom Zanussi (12):
dmaengine: idxd: Export drv_enable/disable and related functions
dmaengine: idxd: Export descriptor management functions
dmaengine: idxd: Export wq resource management functions
dmaengine: idxd: Add wq private data accessors
dmaengine: idxd: add callback support for iaa crypto
crypto: iaa - Add IAA Compression Accelerator Documentation
crypto: iaa - Add Intel IAA Compression Accelerator crypto driver core
crypto: iaa - Add per-cpu workqueue table with rebalancing
crypto: iaa - Add compression mode management along with fixed mode
crypto: iaa - Add support for deflate-iaa compression algorithm
crypto: iaa - Add irq support for the crypto async interface
crypto: iaa - Add IAA Compression Accelerator stats

.../ABI/stable/sysfs-driver-dma-idxd | 6 +
.../driver-api/crypto/iaa/iaa-crypto.rst | 645 ++++++
Documentation/driver-api/crypto/iaa/index.rst | 20 +
Documentation/driver-api/crypto/index.rst | 20 +
Documentation/driver-api/index.rst | 1 +
MAINTAINERS | 7 +
crypto/testmgr.c | 10 +
drivers/crypto/intel/Kconfig | 1 +
drivers/crypto/intel/Makefile | 1 +
drivers/crypto/intel/iaa/Kconfig | 19 +
drivers/crypto/intel/iaa/Makefile | 12 +
drivers/crypto/intel/iaa/iaa_crypto.h | 182 ++
.../crypto/intel/iaa/iaa_crypto_comp_fixed.c | 92 +
drivers/crypto/intel/iaa/iaa_crypto_main.c | 2056 +++++++++++++++++
drivers/crypto/intel/iaa/iaa_crypto_stats.c | 271 +++
drivers/crypto/intel/iaa/iaa_crypto_stats.h | 58 +
drivers/dma/idxd/bus.c | 6 +
drivers/dma/idxd/cdev.c | 7 +
drivers/dma/idxd/device.c | 9 +-
drivers/dma/idxd/dma.c | 9 +-
drivers/dma/idxd/idxd.h | 84 +-
drivers/dma/idxd/irq.c | 12 +-
drivers/dma/idxd/submit.c | 9 +-
drivers/dma/idxd/sysfs.c | 28 +
include/uapi/linux/idxd.h | 1 +
25 files changed, 3546 insertions(+), 20 deletions(-)
create mode 100644 Documentation/driver-api/crypto/iaa/iaa-crypto.rst
create mode 100644 Documentation/driver-api/crypto/iaa/index.rst
create mode 100644 Documentation/driver-api/crypto/index.rst
create mode 100644 drivers/crypto/intel/iaa/Kconfig
create mode 100644 drivers/crypto/intel/iaa/Makefile
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto.h
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_comp_fixed.c
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_main.c
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_stats.c
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_stats.h

--
2.34.1



2023-07-10 19:09:14

by Tom Zanussi

Subject: [PATCH v7 03/14] dmaengine: idxd: Export drv_enable/disable and related functions

To allow idxd sub-drivers to enable and disable wqs, export them.

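A minimal sketch of how a sub-driver might use the newly exported
enable/disable helpers in its probe and remove callbacks (names are
illustrative and error handling is trimmed; the real iaa_crypto probe
in the later patches does more work under the same wq_lock):

  static int example_wq_probe(struct idxd_dev *idxd_dev)
  {
          struct idxd_wq *wq = idxd_dev_to_wq(idxd_dev);
          int rc;

          mutex_lock(&wq->wq_lock);
          wq->type = IDXD_WQT_KERNEL;
          rc = drv_enable_wq(wq);
          mutex_unlock(&wq->wq_lock);

          return rc;
  }

  static void example_wq_remove(struct idxd_dev *idxd_dev)
  {
          struct idxd_wq *wq = idxd_dev_to_wq(idxd_dev);

          mutex_lock(&wq->wq_lock);
          drv_disable_wq(wq);
          wq->type = IDXD_WQT_NONE;
          mutex_unlock(&wq->wq_lock);
  }
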
Signed-off-by: Tom Zanussi <[email protected]>
Reviewed-by: Dave Jiang <[email protected]>
Reviewed-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/device.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index 5abbcc61c528..87ad95fa3f98 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -1505,6 +1505,7 @@ int drv_enable_wq(struct idxd_wq *wq)
err:
return rc;
}
+EXPORT_SYMBOL_NS_GPL(drv_enable_wq, IDXD);

void drv_disable_wq(struct idxd_wq *wq)
{
@@ -1526,6 +1527,7 @@ void drv_disable_wq(struct idxd_wq *wq)
wq->type = IDXD_WQT_NONE;
wq->client_count = 0;
}
+EXPORT_SYMBOL_NS_GPL(drv_disable_wq, IDXD);

int idxd_device_drv_probe(struct idxd_dev *idxd_dev)
{
--
2.34.1


2023-07-10 19:09:18

by Tom Zanussi

Subject: [PATCH v7 06/14] dmaengine: idxd: Add wq private data accessors

Add the accessors idxd_wq_set_private() and idxd_wq_get_private()
allowing users to set and retrieve a private void * associated with an
idxd_wq.

The private data is stored in the idxd_dev.conf_dev associated with
each idxd_wq.
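
For illustration, a sub-driver would typically stash its per-wq
context at probe/bind time and retrieve it later from a completion
path; roughly along these lines (struct example_ctx is hypothetical):

  struct example_ctx *ctx;

  /* at probe/bind time */
  ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
  if (!ctx)
          return -ENOMEM;
  idxd_wq_set_private(wq, ctx);

  /* later, e.g. in a completion callback given only the wq */
  ctx = idxd_wq_get_private(wq);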

Signed-off-by: Tom Zanussi <[email protected]>
---
drivers/dma/idxd/idxd.h | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index 276b5f9cf967..971daf323655 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -609,6 +609,16 @@ static inline int idxd_wq_refcount(struct idxd_wq *wq)
return wq->client_count;
};

+static inline void idxd_wq_set_private(struct idxd_wq *wq, void *private)
+{
+ dev_set_drvdata(wq_confdev(wq), private);
+}
+
+static inline void *idxd_wq_get_private(struct idxd_wq *wq)
+{
+ return dev_get_drvdata(wq_confdev(wq));
+}
+
/*
* Intel IAA does not support batch processing.
* The max batch size of device, max batch size of wq and
--
2.34.1


2023-07-10 19:09:20

by Tom Zanussi

Subject: [PATCH v7 04/14] dmaengine: idxd: Export descriptor management functions

To allow idxd sub-drivers to access the descriptor management
functions, export them.
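
Roughly, a sub-driver submission path built on these exports looks
like the following sketch (illustrative only; the hardware-specific
descriptor fields and the completion wait are elided):

  struct idxd_desc *desc;
  int rc;

  desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
  if (IS_ERR(desc))
          return PTR_ERR(desc);

  /* ... fill in desc->iax_hw and desc->compl_dma here ... */

  rc = idxd_submit_desc(wq, desc);
  if (rc) {
          idxd_free_desc(wq, desc);
          return rc;
  }

  /* ... wait for or poll the completion record ... */

  idxd_free_desc(wq, desc);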

Signed-off-by: Tom Zanussi <[email protected]>
Reviewed-by: Dave Jiang <[email protected]>
Reviewed-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/submit.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c
index c01db23e3333..5e651e216094 100644
--- a/drivers/dma/idxd/submit.c
+++ b/drivers/dma/idxd/submit.c
@@ -61,6 +61,7 @@ struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype)

return __get_desc(wq, idx, cpu);
}
+EXPORT_SYMBOL_NS_GPL(idxd_alloc_desc, IDXD);

void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
{
@@ -69,6 +70,7 @@ void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
desc->cpu = -1;
sbitmap_queue_clear(&wq->sbq, desc->id, cpu);
}
+EXPORT_SYMBOL_NS_GPL(idxd_free_desc, IDXD);

static struct idxd_desc *list_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
struct idxd_desc *desc)
@@ -215,3 +217,4 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
percpu_ref_put(&wq->wq_active);
return 0;
}
+EXPORT_SYMBOL_NS_GPL(idxd_submit_desc, IDXD);
--
2.34.1


2023-07-10 19:09:21

by Tom Zanussi

Subject: [PATCH v7 02/14] dmaengine: idxd: add external module driver support for dsa_bus_type

From: Dave Jiang <[email protected]>

Add support to allow an external driver to be registered to the
dsa_bus_type and also auto-loaded.
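
With the uevent hook and the MODULE_ALIAS_IDXD_DEVICE()/
IDXD_DEVICES_MODALIAS_FMT definitions below, an external sub-driver
module can be autoloaded by udev when a matching device appears. A
sketch of what such a module might declare (the driver name and
probe/remove callbacks are illustrative, and the '0' passed to
MODULE_ALIAS_IDXD_DEVICE() simply mirrors the value hard-coded in
idxd_bus_uevent() below):

  static enum idxd_dev_type dev_types[] = {
          IDXD_DEV_WQ,
          IDXD_DEV_NONE,
  };

  static struct idxd_device_driver example_idxd_driver = {
          .probe  = example_wq_probe,
          .remove = example_wq_remove,
          .name   = "example",
          .type   = dev_types,
  };
  module_idxd_driver(example_idxd_driver);

  MODULE_ALIAS_IDXD_DEVICE(0);
  MODULE_LICENSE("GPL");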

Signed-off-by: Dave Jiang <[email protected]>
Signed-off-by: Tom Zanussi <[email protected]>
---
drivers/dma/idxd/bus.c | 6 ++++++
drivers/dma/idxd/idxd.h | 3 +++
2 files changed, 9 insertions(+)

diff --git a/drivers/dma/idxd/bus.c b/drivers/dma/idxd/bus.c
index 6f84621053c6..0c9e689a2e77 100644
--- a/drivers/dma/idxd/bus.c
+++ b/drivers/dma/idxd/bus.c
@@ -67,11 +67,17 @@ static void idxd_config_bus_remove(struct device *dev)
idxd_drv->remove(idxd_dev);
}

+static int idxd_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
+{
+ return add_uevent_var(env, "MODALIAS=" IDXD_DEVICES_MODALIAS_FMT, 0);
+}
+
struct bus_type dsa_bus_type = {
.name = "dsa",
.match = idxd_config_bus_match,
.probe = idxd_config_bus_probe,
.remove = idxd_config_bus_remove,
+ .uevent = idxd_bus_uevent,
};
EXPORT_SYMBOL_GPL(dsa_bus_type);

diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index c62c78e1c9fa..276b5f9cf967 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -646,6 +646,9 @@ static inline int idxd_wq_driver_name_match(struct idxd_wq *wq, struct device *d
return (strncmp(wq->driver_name, dev->driver->name, strlen(dev->driver->name)) == 0);
}

+#define MODULE_ALIAS_IDXD_DEVICE(type) MODULE_ALIAS("idxd:t" __stringify(type) "*")
+#define IDXD_DEVICES_MODALIAS_FMT "idxd:t%d"
+
int __must_check __idxd_driver_register(struct idxd_device_driver *idxd_drv,
struct module *module, const char *mod_name);
#define idxd_driver_register(driver) \
--
2.34.1


2023-07-10 19:09:22

by Tom Zanussi

Subject: [PATCH v7 05/14] dmaengine: idxd: Export wq resource management functions

To allow idxd sub-drivers to access the wq resource management
functions, export them.

Signed-off-by: Tom Zanussi <[email protected]>
Reviewed-by: Dave Jiang <[email protected]>
Reviewed-by: Fenghua Yu <[email protected]>
---
drivers/dma/idxd/device.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index 87ad95fa3f98..626600bd394b 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -161,6 +161,7 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
free_hw_descs(wq);
return rc;
}
+EXPORT_SYMBOL_NS_GPL(idxd_wq_alloc_resources, IDXD);

void idxd_wq_free_resources(struct idxd_wq *wq)
{
@@ -174,6 +175,7 @@ void idxd_wq_free_resources(struct idxd_wq *wq)
dma_free_coherent(dev, wq->compls_size, wq->compls, wq->compls_addr);
sbitmap_queue_free(&wq->sbq);
}
+EXPORT_SYMBOL_NS_GPL(idxd_wq_free_resources, IDXD);

int idxd_wq_enable(struct idxd_wq *wq)
{
@@ -422,6 +424,7 @@ int idxd_wq_init_percpu_ref(struct idxd_wq *wq)
reinit_completion(&wq->wq_resurrect);
return 0;
}
+EXPORT_SYMBOL_NS_GPL(idxd_wq_init_percpu_ref, IDXD);

void __idxd_wq_quiesce(struct idxd_wq *wq)
{
@@ -431,6 +434,7 @@ void __idxd_wq_quiesce(struct idxd_wq *wq)
complete_all(&wq->wq_resurrect);
wait_for_completion(&wq->wq_dead);
}
+EXPORT_SYMBOL_NS_GPL(__idxd_wq_quiesce, IDXD);

void idxd_wq_quiesce(struct idxd_wq *wq)
{
@@ -438,6 +442,7 @@ void idxd_wq_quiesce(struct idxd_wq *wq)
__idxd_wq_quiesce(wq);
mutex_unlock(&wq->wq_lock);
}
+EXPORT_SYMBOL_NS_GPL(idxd_wq_quiesce, IDXD);

/* Device control bits */
static inline bool idxd_is_enabled(struct idxd_device *idxd)
--
2.34.1


2023-07-10 19:09:28

by Tom Zanussi

Subject: [PATCH v7 10/14] crypto: iaa - Add per-cpu workqueue table with rebalancing

The iaa compression/decompression algorithms in later patches need a
way to retrieve an appropriate IAA workqueue depending on how close
the associated IAA device is to the current cpu.

For this purpose, add a per-cpu array of workqueues such that an
appropriate workqueue can be retrieved by simply accessing the per-cpu
array.

Whenever a new workqueue is bound to or unbound from the iaa_crypto
driver, the available workqueues are 'rebalanced' such that work
submitted from a particular CPU is given to the most appropriate
workqueue available. There currently isn't any way for the user to
tweak the way this is done internally - if necessary, knobs can be
added later for that purpose. Current best practice is to configure
and bind at least one workqueue for each IAA device, but as long as
there is at least one workqueue configured and bound to any IAA device
in the system, the iaa_crypto driver will work, albeit most likely not
as efficiently.
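
For reference, the later compress/decompress patches consume this
table by picking a wq for the submitting cpu; a simplified sketch of
that lookup (the helper name is illustrative, not the exact function
added later in the series) is:

  static struct idxd_wq *example_wq_for_cpu(int cpu)
  {
          struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);

          if (!entry->n_wqs)
                  return NULL;

          /* simple round-robin over the wqs assigned to this cpu */
          if (++entry->cur_wq >= entry->n_wqs)
                  entry->cur_wq = 0;

          return entry->wqs[entry->cur_wq];
  }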

[ Based on work originally by George Powley, Jing Lin and Kyung Min
Park ]

Signed-off-by: Tom Zanussi <[email protected]>
---
drivers/crypto/intel/iaa/iaa_crypto.h | 7 +
drivers/crypto/intel/iaa/iaa_crypto_main.c | 221 +++++++++++++++++++++
2 files changed, 228 insertions(+)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 5d1fff7f4b8e..c25546fa87f7 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -27,4 +27,11 @@ struct iaa_device {
struct list_head wqs;
};

+struct wq_table_entry {
+ struct idxd_wq **wqs;
+ int max_wqs;
+ int n_wqs;
+ int cur_wq;
+};
+
#endif
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index ab1bd0b00d89..670e9c3c8f9a 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -22,6 +22,46 @@

/* number of iaa instances probed */
static unsigned int nr_iaa;
+static unsigned int nr_cpus;
+static unsigned int nr_nodes;
+static unsigned int nr_cpus_per_node;
+
+/* Number of physical cpus sharing each iaa instance */
+static unsigned int cpus_per_iaa;
+
+/* Per-cpu lookup table for balanced wqs */
+static struct wq_table_entry __percpu *wq_table;
+
+static void wq_table_add(int cpu, struct idxd_wq *wq)
+{
+ struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
+
+ if (WARN_ON(entry->n_wqs == entry->max_wqs))
+ return;
+
+ entry->wqs[entry->n_wqs++] = wq;
+
+ pr_debug("%s: added iaa wq %d.%d to idx %d of cpu %d\n", __func__,
+ entry->wqs[entry->n_wqs - 1]->idxd->id,
+ entry->wqs[entry->n_wqs - 1]->id, entry->n_wqs - 1, cpu);
+}
+
+static void wq_table_free_entry(int cpu)
+{
+ struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
+
+ kfree(entry->wqs);
+ memset(entry, 0, sizeof(*entry));
+}
+
+static void wq_table_clear_entry(int cpu)
+{
+ struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
+
+ entry->n_wqs = 0;
+ entry->cur_wq = 0;
+ memset(entry->wqs, 0, entry->max_wqs * sizeof(struct idxd_wq *));
+}

static LIST_HEAD(iaa_devices);
static DEFINE_MUTEX(iaa_devices_lock);
@@ -141,6 +181,53 @@ static void del_iaa_wq(struct iaa_device *iaa_device, struct idxd_wq *wq)
}
}

+static void clear_wq_table(void)
+{
+ int cpu;
+
+ for (cpu = 0; cpu < nr_cpus; cpu++)
+ wq_table_clear_entry(cpu);
+
+ pr_debug("cleared wq table\n");
+}
+
+static void free_wq_table(void)
+{
+ int cpu;
+
+ for (cpu = 0; cpu < nr_cpus; cpu++)
+ wq_table_free_entry(cpu);
+
+ free_percpu(wq_table);
+
+ pr_debug("freed wq table\n");
+}
+
+static int alloc_wq_table(int max_wqs)
+{
+ struct wq_table_entry *entry;
+ int cpu;
+
+ wq_table = alloc_percpu(struct wq_table_entry);
+ if (!wq_table)
+ return -ENOMEM;
+
+ for (cpu = 0; cpu < nr_cpus; cpu++) {
+ entry = per_cpu_ptr(wq_table, cpu);
+ entry->wqs = kzalloc(max_wqs * sizeof(struct idxd_wq *), GFP_KERNEL);
+ if (!entry->wqs) {
+ free_wq_table();
+ return -ENOMEM;
+ }
+
+ entry->max_wqs = max_wqs;
+ }
+
+ pr_debug("initialized wq table\n");
+
+ return 0;
+}
+
static int save_iaa_wq(struct idxd_wq *wq)
{
struct iaa_device *iaa_device, *found = NULL;
@@ -195,6 +282,8 @@ static int save_iaa_wq(struct idxd_wq *wq)
return -EINVAL;

idxd_wq_get(wq);
+
+ cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa;
out:
return 0;
}
@@ -210,6 +299,116 @@ static void remove_iaa_wq(struct idxd_wq *wq)
break;
}
}
+
+ if (nr_iaa)
+ cpus_per_iaa = (nr_nodes * nr_cpus_per_node) / nr_iaa;
+ else
+ cpus_per_iaa = 0;
+}
+
+static int wq_table_add_wqs(int iaa, int cpu)
+{
+ struct iaa_device *iaa_device, *found_device = NULL;
+ int ret = 0, cur_iaa = 0, n_wqs_added = 0;
+ struct idxd_device *idxd;
+ struct iaa_wq *iaa_wq;
+ struct pci_dev *pdev;
+ struct device *dev;
+
+ list_for_each_entry(iaa_device, &iaa_devices, list) {
+ idxd = iaa_device->idxd;
+ pdev = idxd->pdev;
+ dev = &pdev->dev;
+
+ if (cur_iaa != iaa) {
+ cur_iaa++;
+ continue;
+ }
+
+ found_device = iaa_device;
+ dev_dbg(dev, "getting wq from iaa_device %d, cur_iaa %d\n",
+ found_device->idxd->id, cur_iaa);
+ break;
+ }
+
+ if (!found_device) {
+ found_device = list_first_entry_or_null(&iaa_devices,
+ struct iaa_device, list);
+ if (!found_device) {
+ pr_debug("couldn't find any iaa devices with wqs!\n");
+ ret = -EINVAL;
+ goto out;
+ }
+ cur_iaa = 0;
+
+ idxd = found_device->idxd;
+ pdev = idxd->pdev;
+ dev = &pdev->dev;
+ dev_dbg(dev, "getting wq from only iaa_device %d, cur_iaa %d\n",
+ found_device->idxd->id, cur_iaa);
+ }
+
+ list_for_each_entry(iaa_wq, &found_device->wqs, list) {
+ wq_table_add(cpu, iaa_wq->wq);
+ pr_debug("rebalance: added wq for cpu=%d: iaa wq %d.%d\n",
+ cpu, iaa_wq->wq->idxd->id, iaa_wq->wq->id);
+ n_wqs_added++;
+ }
+
+ if (!n_wqs_added) {
+ pr_debug("couldn't find any iaa wqs!\n");
+ ret = -EINVAL;
+ goto out;
+ }
+out:
+ return ret;
+}
+
+/*
+ * Rebalance the wq table so that given a cpu, it's easy to find the
+ * closest IAA instance. The idea is to try to choose the most
+ * appropriate IAA instance for a caller and spread available
+ * workqueues around to clients.
+ */
+static void rebalance_wq_table(void)
+{
+ const struct cpumask *node_cpus;
+ int node, cpu, iaa = -1;
+
+ if (nr_iaa == 0)
+ return;
+
+ pr_debug("rebalance: nr_nodes=%d, nr_cpus %d, nr_iaa %d, cpus_per_iaa %d\n",
+ nr_nodes, nr_cpus, nr_iaa, cpus_per_iaa);
+
+ clear_wq_table();
+
+ if (nr_iaa == 1) {
+ for (cpu = 0; cpu < nr_cpus; cpu++) {
+ if (WARN_ON(wq_table_add_wqs(0, cpu))) {
+ pr_debug("could not add any wqs for iaa 0 to cpu %d!\n", cpu);
+ return;
+ }
+ }
+
+ return;
+ }
+
+ for_each_online_node(node) {
+ node_cpus = cpumask_of_node(node);
+
+ for (cpu = 0; cpu < nr_cpus_per_node; cpu++) {
+ int node_cpu = cpumask_nth(cpu, node_cpus);
+
+ if ((cpu % cpus_per_iaa) == 0)
+ iaa++;
+
+ if (WARN_ON(wq_table_add_wqs(iaa, node_cpu))) {
+ pr_debug("could not add any wqs for iaa %d to cpu %d!\n", iaa, cpu);
+ return;
+ }
+ }
+ }
}

static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
@@ -218,6 +417,7 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
struct idxd_device *idxd = wq->idxd;
struct idxd_driver_data *data = idxd->data;
struct device *dev = &idxd_dev->conf_dev;
+ bool first_wq = false;
int ret = 0;

if (idxd->state != IDXD_DEV_ENABLED)
@@ -248,10 +448,19 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)

mutex_lock(&iaa_devices_lock);

+ if (list_empty(&iaa_devices)) {
+ ret = alloc_wq_table(wq->idxd->max_wqs);
+ if (ret)
+ goto err_alloc;
+ first_wq = true;
+ }
+
ret = save_iaa_wq(wq);
if (ret)
goto err_save;

+ rebalance_wq_table();
+
mutex_unlock(&iaa_devices_lock);
out:
mutex_unlock(&wq->wq_lock);
@@ -259,6 +468,10 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
return ret;

err_save:
+ if (first_wq)
+ free_wq_table();
+err_alloc:
+ mutex_unlock(&iaa_devices_lock);
drv_disable_wq(wq);
err:
wq->type = IDXD_WQT_NONE;
@@ -277,6 +490,10 @@ static void iaa_crypto_remove(struct idxd_dev *idxd_dev)

remove_iaa_wq(wq);
drv_disable_wq(wq);
+ rebalance_wq_table();
+
+ if (nr_iaa == 0)
+ free_wq_table();

mutex_unlock(&iaa_devices_lock);
mutex_unlock(&wq->wq_lock);
@@ -298,6 +515,10 @@ static int __init iaa_crypto_init_module(void)
{
int ret = 0;

+ nr_cpus = num_online_cpus();
+ nr_nodes = num_online_nodes();
+ nr_cpus_per_node = nr_cpus / nr_nodes;
+
ret = idxd_driver_register(&iaa_crypto_driver);
if (ret) {
pr_debug("IAA wq sub-driver registration failed\n");
--
2.34.1


2023-07-10 19:09:33

by Tom Zanussi

Subject: [PATCH v7 11/14] crypto: iaa - Add compression mode management along with fixed mode

Define an in-kernel API for adding and removing compression modes,
which can be used by kernel modules or other kernel code that
implements IAA compression modes.

Also add a separate file, iaa_crypto_comp_fixed.c, containing Huffman
tables generated for the IAA 'fixed' compression mode. Future
compression modes can be added in a similar fashion.

One or more crypto compression algorithms will be created for each
compression mode, each of which can be selected as the compression
algorithm to be used by a particular facility.
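
As a usage sketch, a module providing its own workload-tuned mode
would call the API roughly as follows, before any IAA device is bound
(the example_* tables and the mode name are placeholders, not real
statistics-derived tables):

  static int __init example_mode_init(void)
  {
          return add_iaa_compression_mode("example",
                                          example_ll_table,
                                          sizeof(example_ll_table),
                                          example_d_table,
                                          sizeof(example_d_table),
                                          NULL, 0,   /* no header table */
                                          0,         /* no decomp table flags */
                                          NULL, NULL /* no init/free callbacks */);
  }

  static void __exit example_mode_exit(void)
  {
          remove_iaa_compression_mode("example");
  }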

Signed-off-by: Tom Zanussi <[email protected]>
---
drivers/crypto/intel/iaa/Makefile | 2 +-
drivers/crypto/intel/iaa/iaa_crypto.h | 85 +++++
.../crypto/intel/iaa/iaa_crypto_comp_fixed.c | 92 +++++
drivers/crypto/intel/iaa/iaa_crypto_main.c | 327 +++++++++++++++++-
4 files changed, 504 insertions(+), 2 deletions(-)
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_comp_fixed.c

diff --git a/drivers/crypto/intel/iaa/Makefile b/drivers/crypto/intel/iaa/Makefile
index 03859431c897..cc87feffd059 100644
--- a/drivers/crypto/intel/iaa/Makefile
+++ b/drivers/crypto/intel/iaa/Makefile
@@ -7,4 +7,4 @@ ccflags-y += -I $(srctree)/drivers/dma/idxd -DDEFAULT_SYMBOL_NAMESPACE=IDXD

obj-$(CONFIG_CRYPTO_DEV_IAA_CRYPTO) := iaa_crypto.o

-iaa_crypto-y := iaa_crypto_main.o
+iaa_crypto-y := iaa_crypto_main.o iaa_crypto_comp_fixed.o
diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index c25546fa87f7..33e68f9d3d02 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -10,6 +10,11 @@

#define IDXD_SUBDRIVER_NAME "crypto"

+#define IAA_COMP_MODES_MAX 2
+
+#define FIXED_HDR 0x2
+#define FIXED_HDR_SIZE 3
+
/* Representation of IAA workqueue */
struct iaa_wq {
struct list_head list;
@@ -18,11 +23,23 @@ struct iaa_wq {
struct iaa_device *iaa_device;
};

+struct iaa_device_compression_mode {
+ const char *name;
+
+ struct aecs_comp_table_record *aecs_comp_table;
+ struct aecs_decomp_table_record *aecs_decomp_table;
+
+ dma_addr_t aecs_comp_table_dma_addr;
+ dma_addr_t aecs_decomp_table_dma_addr;
+};
+
/* Representation of IAA device with wqs, populated by probe */
struct iaa_device {
struct list_head list;
struct idxd_device *idxd;

+ struct iaa_device_compression_mode *compression_modes[IAA_COMP_MODES_MAX];
+
int n_wq;
struct list_head wqs;
};
@@ -34,4 +51,72 @@ struct wq_table_entry {
int cur_wq;
};

+#define IAA_AECS_ALIGN 32
+
+/*
+ * Analytics Engine Configuration and State (AECS) contains parameters and
+ * internal state of the analytics engine.
+ */
+struct aecs_comp_table_record {
+ u32 crc;
+ u32 xor_checksum;
+ u32 reserved0[5];
+ u32 num_output_accum_bits;
+ u8 output_accum[256];
+ u32 ll_sym[286];
+ u32 reserved1;
+ u32 reserved2;
+ u32 d_sym[30];
+ u32 reserved_padding[2];
+} __packed;
+
+/* AECS for decompress */
+struct aecs_decomp_table_record {
+ u32 crc;
+ u32 xor_checksum;
+ u32 low_filter_param;
+ u32 high_filter_param;
+ u32 output_mod_idx;
+ u32 drop_init_decomp_out_bytes;
+ u32 reserved[36];
+ u32 output_accum_data[2];
+ u32 out_bits_valid;
+ u32 bit_off_indexing;
+ u32 input_accum_data[64];
+ u8 size_qw[32];
+ u32 decomp_state[1220];
+} __packed;
+
+int iaa_aecs_init_fixed(void);
+void iaa_aecs_cleanup_fixed(void);
+
+typedef int (*iaa_dev_comp_init_fn_t) (struct iaa_device_compression_mode *mode);
+typedef int (*iaa_dev_comp_free_fn_t) (struct iaa_device_compression_mode *mode);
+
+struct iaa_compression_mode {
+ const char *name;
+ u32 *ll_table;
+ int ll_table_size;
+ u32 *d_table;
+ int d_table_size;
+ u32 *header_table;
+ int header_table_size;
+ u16 gen_decomp_table_flags;
+ iaa_dev_comp_init_fn_t init;
+ iaa_dev_comp_free_fn_t free;
+};
+
+int add_iaa_compression_mode(const char *name,
+ const u32 *ll_table,
+ int ll_table_size,
+ const u32 *d_table,
+ int d_table_size,
+ const u8 *header_table,
+ int header_table_size,
+ u16 gen_decomp_table_flags,
+ iaa_dev_comp_init_fn_t init,
+ iaa_dev_comp_free_fn_t free);
+
+void remove_iaa_compression_mode(const char *name);
+
#endif
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_comp_fixed.c b/drivers/crypto/intel/iaa/iaa_crypto_comp_fixed.c
new file mode 100644
index 000000000000..e965da11b4d9
--- /dev/null
+++ b/drivers/crypto/intel/iaa/iaa_crypto_comp_fixed.c
@@ -0,0 +1,92 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Intel Corporation. All rights rsvd. */
+
+#include "idxd.h"
+#include "iaa_crypto.h"
+
+/*
+ * Fixed Huffman tables the IAA hardware requires to implement RFC-1951.
+ */
+const u32 fixed_ll_sym[286] = {
+ 0x40030, 0x40031, 0x40032, 0x40033, 0x40034, 0x40035, 0x40036, 0x40037,
+ 0x40038, 0x40039, 0x4003A, 0x4003B, 0x4003C, 0x4003D, 0x4003E, 0x4003F,
+ 0x40040, 0x40041, 0x40042, 0x40043, 0x40044, 0x40045, 0x40046, 0x40047,
+ 0x40048, 0x40049, 0x4004A, 0x4004B, 0x4004C, 0x4004D, 0x4004E, 0x4004F,
+ 0x40050, 0x40051, 0x40052, 0x40053, 0x40054, 0x40055, 0x40056, 0x40057,
+ 0x40058, 0x40059, 0x4005A, 0x4005B, 0x4005C, 0x4005D, 0x4005E, 0x4005F,
+ 0x40060, 0x40061, 0x40062, 0x40063, 0x40064, 0x40065, 0x40066, 0x40067,
+ 0x40068, 0x40069, 0x4006A, 0x4006B, 0x4006C, 0x4006D, 0x4006E, 0x4006F,
+ 0x40070, 0x40071, 0x40072, 0x40073, 0x40074, 0x40075, 0x40076, 0x40077,
+ 0x40078, 0x40079, 0x4007A, 0x4007B, 0x4007C, 0x4007D, 0x4007E, 0x4007F,
+ 0x40080, 0x40081, 0x40082, 0x40083, 0x40084, 0x40085, 0x40086, 0x40087,
+ 0x40088, 0x40089, 0x4008A, 0x4008B, 0x4008C, 0x4008D, 0x4008E, 0x4008F,
+ 0x40090, 0x40091, 0x40092, 0x40093, 0x40094, 0x40095, 0x40096, 0x40097,
+ 0x40098, 0x40099, 0x4009A, 0x4009B, 0x4009C, 0x4009D, 0x4009E, 0x4009F,
+ 0x400A0, 0x400A1, 0x400A2, 0x400A3, 0x400A4, 0x400A5, 0x400A6, 0x400A7,
+ 0x400A8, 0x400A9, 0x400AA, 0x400AB, 0x400AC, 0x400AD, 0x400AE, 0x400AF,
+ 0x400B0, 0x400B1, 0x400B2, 0x400B3, 0x400B4, 0x400B5, 0x400B6, 0x400B7,
+ 0x400B8, 0x400B9, 0x400BA, 0x400BB, 0x400BC, 0x400BD, 0x400BE, 0x400BF,
+ 0x48190, 0x48191, 0x48192, 0x48193, 0x48194, 0x48195, 0x48196, 0x48197,
+ 0x48198, 0x48199, 0x4819A, 0x4819B, 0x4819C, 0x4819D, 0x4819E, 0x4819F,
+ 0x481A0, 0x481A1, 0x481A2, 0x481A3, 0x481A4, 0x481A5, 0x481A6, 0x481A7,
+ 0x481A8, 0x481A9, 0x481AA, 0x481AB, 0x481AC, 0x481AD, 0x481AE, 0x481AF,
+ 0x481B0, 0x481B1, 0x481B2, 0x481B3, 0x481B4, 0x481B5, 0x481B6, 0x481B7,
+ 0x481B8, 0x481B9, 0x481BA, 0x481BB, 0x481BC, 0x481BD, 0x481BE, 0x481BF,
+ 0x481C0, 0x481C1, 0x481C2, 0x481C3, 0x481C4, 0x481C5, 0x481C6, 0x481C7,
+ 0x481C8, 0x481C9, 0x481CA, 0x481CB, 0x481CC, 0x481CD, 0x481CE, 0x481CF,
+ 0x481D0, 0x481D1, 0x481D2, 0x481D3, 0x481D4, 0x481D5, 0x481D6, 0x481D7,
+ 0x481D8, 0x481D9, 0x481DA, 0x481DB, 0x481DC, 0x481DD, 0x481DE, 0x481DF,
+ 0x481E0, 0x481E1, 0x481E2, 0x481E3, 0x481E4, 0x481E5, 0x481E6, 0x481E7,
+ 0x481E8, 0x481E9, 0x481EA, 0x481EB, 0x481EC, 0x481ED, 0x481EE, 0x481EF,
+ 0x481F0, 0x481F1, 0x481F2, 0x481F3, 0x481F4, 0x481F5, 0x481F6, 0x481F7,
+ 0x481F8, 0x481F9, 0x481FA, 0x481FB, 0x481FC, 0x481FD, 0x481FE, 0x481FF,
+ 0x38000, 0x38001, 0x38002, 0x38003, 0x38004, 0x38005, 0x38006, 0x38007,
+ 0x38008, 0x38009, 0x3800A, 0x3800B, 0x3800C, 0x3800D, 0x3800E, 0x3800F,
+ 0x38010, 0x38011, 0x38012, 0x38013, 0x38014, 0x38015, 0x38016, 0x38017,
+ 0x400C0, 0x400C1, 0x400C2, 0x400C3, 0x400C4, 0x400C5
+};
+
+const u32 fixed_d_sym[30] = {
+ 0x28000, 0x28001, 0x28002, 0x28003, 0x28004, 0x28005, 0x28006, 0x28007,
+ 0x28008, 0x28009, 0x2800A, 0x2800B, 0x2800C, 0x2800D, 0x2800E, 0x2800F,
+ 0x28010, 0x28011, 0x28012, 0x28013, 0x28014, 0x28015, 0x28016, 0x28017,
+ 0x28018, 0x28019, 0x2801A, 0x2801B, 0x2801C, 0x2801D
+};
+
+static int init_fixed_mode(struct iaa_device_compression_mode *mode)
+{
+ struct aecs_comp_table_record *comp_table = mode->aecs_comp_table;
+ u32 bfinal = 1;
+ u32 offset;
+
+ /* Configure aecs table using fixed Huffman table */
+ comp_table->crc = 0;
+ comp_table->xor_checksum = 0;
+ offset = comp_table->num_output_accum_bits / 8;
+ comp_table->output_accum[offset] = FIXED_HDR | bfinal;
+ comp_table->num_output_accum_bits = FIXED_HDR_SIZE;
+
+ return 0;
+}
+
+int iaa_aecs_init_fixed(void)
+{
+ int ret;
+
+ ret = add_iaa_compression_mode("fixed",
+ fixed_ll_sym,
+ sizeof(fixed_ll_sym),
+ fixed_d_sym,
+ sizeof(fixed_d_sym),
+ NULL, 0, 0,
+ init_fixed_mode, NULL);
+ if (!ret)
+ pr_debug("IAA fixed compression mode initialized\n");
+
+ return ret;
+}
+
+void iaa_aecs_cleanup_fixed(void)
+{
+ remove_iaa_compression_mode("fixed");
+}
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 670e9c3c8f9a..0c59332456f0 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -66,6 +66,299 @@ static void wq_table_clear_entry(int cpu)
static LIST_HEAD(iaa_devices);
static DEFINE_MUTEX(iaa_devices_lock);

+static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX];
+
+static int find_empty_iaa_compression_mode(void)
+{
+ int i;
+
+ for (i = 0; i < IAA_COMP_MODES_MAX; i++) {
+ if (!iaa_compression_modes[i])
+ return i;
+ }
+
+ /* all slots in use */
+ return -EINVAL;
+}
+
+static struct iaa_compression_mode *find_iaa_compression_mode(const char *name, int *idx)
+{
+ struct iaa_compression_mode *mode;
+ int i;
+
+ for (i = 0; i < IAA_COMP_MODES_MAX; i++) {
+ mode = iaa_compression_modes[i];
+ if (!mode)
+ continue;
+
+ if (!strcmp(mode->name, name)) {
+ *idx = i;
+ return iaa_compression_modes[i];
+ }
+ }
+
+ return NULL;
+}
+
+static void free_iaa_compression_mode(struct iaa_compression_mode *mode)
+{
+ kfree(mode->name);
+ kfree(mode->ll_table);
+ kfree(mode->d_table);
+ kfree(mode->header_table);
+
+ kfree(mode);
+}
+
+/*
+ * IAA Compression modes are defined by an ll_table, a d_table, and an
+ * optional header_table. These tables are typically generated and
+ * captured using statistics collected from running actual
+ * compress/decompress workloads.
+ *
+ * A module or other kernel code can add and remove compression modes
+ * with a given name using the exported @add_iaa_compression_mode()
+ * and @remove_iaa_compression_mode functions.
+ *
+ * When a new compression mode is added, the tables are saved in a
+ * global compression mode list. When IAA devices are added, a
+ * per-IAA device dma mapping is created for each IAA device, for each
+ * compression mode. These are the tables used to do the actual
+ * compression/decompression and are unmapped if/when the devices are
+ * removed. Currently, compression modes must be added before any
+ * device is added, and removed after all devices have been removed.
+ */
+
+/**
+ * remove_iaa_compression_mode - Remove an IAA compression mode
+ * @name: The name the compression mode will be known as
+ *
+ * Remove the IAA compression mode named @name.
+ */
+void remove_iaa_compression_mode(const char *name)
+{
+ struct iaa_compression_mode *mode;
+ int idx;
+
+ mutex_lock(&iaa_devices_lock);
+
+ if (!list_empty(&iaa_devices))
+ goto out;
+
+ mode = find_iaa_compression_mode(name, &idx);
+ if (mode) {
+ free_iaa_compression_mode(mode);
+ iaa_compression_modes[idx] = NULL;
+ }
+out:
+ mutex_unlock(&iaa_devices_lock);
+}
+EXPORT_SYMBOL_GPL(remove_iaa_compression_mode);
+
+/**
+ * add_iaa_compression_mode - Add an IAA compression mode
+ * @name: The name the compression mode will be known as
+ * @ll_table: The ll table
+ * @ll_table_size: The ll table size in bytes
+ * @d_table: The d table
+ * @d_table_size: The d table size in bytes
+ * @header_table: Optional header table
+ * @header_table_size: Optional header table size in bytes
+ * @gen_decomp_table_flags: Optional flags used to generate the decomp table
+ * @init: Optional callback function to init the compression mode data
+ * @free: Optional callback function to free the compression mode data
+ *
+ * Add a new IAA compression mode named @name.
+ *
+ * Returns 0 if successful, errcode otherwise.
+ */
+int add_iaa_compression_mode(const char *name,
+ const u32 *ll_table,
+ int ll_table_size,
+ const u32 *d_table,
+ int d_table_size,
+ const u8 *header_table,
+ int header_table_size,
+ u16 gen_decomp_table_flags,
+ iaa_dev_comp_init_fn_t init,
+ iaa_dev_comp_free_fn_t free)
+{
+ struct iaa_compression_mode *mode;
+ int idx, ret = -ENOMEM;
+
+ mutex_lock(&iaa_devices_lock);
+
+ if (!list_empty(&iaa_devices)) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ mode = kzalloc(sizeof(*mode), GFP_KERNEL);
+ if (!mode)
+ goto out;
+
+ mode->name = kstrdup(name, GFP_KERNEL);
+ if (!mode->name)
+ goto free;
+
+ if (ll_table) {
+ mode->ll_table = kzalloc(ll_table_size, GFP_KERNEL);
+ if (!mode->ll_table)
+ goto free;
+ memcpy(mode->ll_table, ll_table, ll_table_size);
+ mode->ll_table_size = ll_table_size;
+ }
+
+ if (d_table) {
+ mode->d_table = kzalloc(d_table_size, GFP_KERNEL);
+ if (!mode->d_table)
+ goto free;
+ memcpy(mode->d_table, d_table, d_table_size);
+ mode->d_table_size = d_table_size;
+ }
+
+ if (header_table) {
+ mode->header_table = kzalloc(header_table_size, GFP_KERNEL);
+ if (!mode->header_table)
+ goto free;
+ memcpy(mode->header_table, header_table, header_table_size);
+ mode->header_table_size = header_table_size;
+ }
+
+ mode->gen_decomp_table_flags = gen_decomp_table_flags;
+
+ mode->init = init;
+ mode->free = free;
+
+ idx = find_empty_iaa_compression_mode();
+ if (idx < 0)
+ goto free;
+
+ pr_debug("IAA compression mode %s added at idx %d\n",
+ mode->name, idx);
+
+ iaa_compression_modes[idx] = mode;
+
+ ret = 0;
+out:
+ mutex_unlock(&iaa_devices_lock);
+
+ return ret;
+free:
+ free_iaa_compression_mode(mode);
+ goto out;
+}
+EXPORT_SYMBOL_GPL(add_iaa_compression_mode);
+
+static void free_device_compression_mode(struct iaa_device *iaa_device,
+ struct iaa_device_compression_mode *device_mode)
+{
+ size_t size = sizeof(struct aecs_comp_table_record) + IAA_AECS_ALIGN;
+ struct device *dev = &iaa_device->idxd->pdev->dev;
+
+ kfree(device_mode->name);
+
+ if (device_mode->aecs_comp_table)
+ dma_free_coherent(dev, size, device_mode->aecs_comp_table,
+ device_mode->aecs_comp_table_dma_addr);
+ if (device_mode->aecs_decomp_table)
+ dma_free_coherent(dev, size, device_mode->aecs_decomp_table,
+ device_mode->aecs_decomp_table_dma_addr);
+
+ kfree(device_mode);
+}
+
+static int init_device_compression_mode(struct iaa_device *iaa_device,
+ struct iaa_compression_mode *mode,
+ int idx, struct idxd_wq *wq)
+{
+ size_t size = sizeof(struct aecs_comp_table_record) + IAA_AECS_ALIGN;
+ struct device *dev = &iaa_device->idxd->pdev->dev;
+ struct iaa_device_compression_mode *device_mode;
+ int ret = -ENOMEM;
+
+ device_mode = kzalloc(sizeof(*device_mode), GFP_KERNEL);
+ if (!device_mode)
+ return -ENOMEM;
+
+ device_mode->name = kstrdup(mode->name, GFP_KERNEL);
+ if (!device_mode->name)
+ goto free;
+
+ device_mode->aecs_comp_table = dma_alloc_coherent(dev, size,
+ &device_mode->aecs_comp_table_dma_addr, GFP_KERNEL);
+ if (!device_mode->aecs_comp_table)
+ goto free;
+
+ device_mode->aecs_decomp_table = dma_alloc_coherent(dev, size,
+ &device_mode->aecs_decomp_table_dma_addr, GFP_KERNEL);
+ if (!device_mode->aecs_decomp_table)
+ goto free;
+
+ /* Add Huffman table to aecs */
+ memset(device_mode->aecs_comp_table, 0, sizeof(*device_mode->aecs_comp_table));
+ memcpy(device_mode->aecs_comp_table->ll_sym, mode->ll_table, mode->ll_table_size);
+ memcpy(device_mode->aecs_comp_table->d_sym, mode->d_table, mode->d_table_size);
+
+ if (mode->init) {
+ ret = mode->init(device_mode);
+ if (ret)
+ goto free;
+ }
+
+ /* mode index should match iaa_compression_modes idx */
+ iaa_device->compression_modes[idx] = device_mode;
+
+ pr_debug("IAA %s compression mode initialized for iaa device %d\n",
+ mode->name, iaa_device->idxd->id);
+
+ ret = 0;
+out:
+ return ret;
+free:
+ pr_debug("IAA %s compression mode initialization failed for iaa device %d\n",
+ mode->name, iaa_device->idxd->id);
+
+ free_device_compression_mode(iaa_device, device_mode);
+ goto out;
+}
+
+static int init_device_compression_modes(struct iaa_device *iaa_device,
+ struct idxd_wq *wq)
+{
+ struct iaa_compression_mode *mode;
+ int i, ret = 0;
+
+ for (i = 0; i < IAA_COMP_MODES_MAX; i++) {
+ mode = iaa_compression_modes[i];
+ if (!mode)
+ continue;
+
+ ret = init_device_compression_mode(iaa_device, mode, i, wq);
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+
+static void remove_device_compression_modes(struct iaa_device *iaa_device)
+{
+ struct iaa_device_compression_mode *device_mode;
+ int i;
+
+ for (i = 0; i < IAA_COMP_MODES_MAX; i++) {
+ device_mode = iaa_device->compression_modes[i];
+ if (!device_mode)
+ continue;
+
+ if (iaa_compression_modes[i]->free)
+ iaa_compression_modes[i]->free(device_mode);
+ free_device_compression_mode(iaa_device, device_mode);
+ iaa_device->compression_modes[i] = NULL;
+ }
+}
+
static struct iaa_device *iaa_device_alloc(void)
{
struct iaa_device *iaa_device;
@@ -120,8 +413,21 @@ static struct iaa_device *add_iaa_device(struct idxd_device *idxd)
return iaa_device;
}

+static int init_iaa_device(struct iaa_device *iaa_device, struct iaa_wq *iaa_wq)
+{
+ int ret = 0;
+
+ ret = init_device_compression_modes(iaa_device, iaa_wq->wq);
+ if (ret)
+ return ret;
+
+ return ret;
+}
+
static void del_iaa_device(struct iaa_device *iaa_device)
{
+ remove_device_compression_modes(iaa_device);
+
list_del(&iaa_device->list);

iaa_device_free(iaa_device);
@@ -276,6 +582,13 @@ static int save_iaa_wq(struct idxd_wq *wq)
del_iaa_device(new_device);
goto out;
}
+
+ ret = init_iaa_device(new_device, new_wq);
+ if (ret) {
+ del_iaa_wq(new_device, new_wq->wq);
+ del_iaa_device(new_device);
+ goto out;
+ }
}

if (WARN_ON(nr_iaa == 0))
@@ -519,20 +832,32 @@ static int __init iaa_crypto_init_module(void)
nr_nodes = num_online_nodes();
nr_cpus_per_node = nr_cpus / nr_nodes;

+ ret = iaa_aecs_init_fixed();
+ if (ret < 0) {
+ pr_debug("IAA fixed compression mode init failed\n");
+ goto out;
+ }
+
ret = idxd_driver_register(&iaa_crypto_driver);
if (ret) {
pr_debug("IAA wq sub-driver registration failed\n");
- goto out;
+ goto err_driver_reg;
}

pr_debug("initialized\n");
out:
return ret;
+
+err_driver_reg:
+ iaa_aecs_cleanup_fixed();
+
+ goto out;
}

static void __exit iaa_crypto_cleanup_module(void)
{
idxd_driver_unregister(&iaa_crypto_driver);
+ iaa_aecs_cleanup_fixed();

pr_debug("cleaned up\n");
}
--
2.34.1


2023-07-10 19:09:42

by Tom Zanussi

Subject: [PATCH v7 13/14] crypto: iaa - Add irq support for the crypto async interface

The existing iaa crypto async support provides an implementation that
satisfies the interface but does so in a synchronous manner - it fills
and submits the IDXD descriptor and then waits for it to complete
before returning. This isn't a problem at the moment, since all
existing callers (e.g. zswap) wrap any asynchronous callees in a
synchronous wrapper anyway.

This change makes the iaa crypto async implementation truly
asynchronous: it fills and submits the IDXD descriptor, then returns
immediately with -EINPROGRESS. It also sets the descriptor's 'request
completion irq' bit and sets up a callback with the IDXD driver which
is called when the operation completes and the irq fires. The
existing callers such as zswap use synchronous wrappers to deal with
-EINPROGRESS and so work as expected without any changes.
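
For callers that are eventually converted to be truly asynchronous,
the -EINPROGRESS return would be consumed directly rather than via
crypto_wait_req(). A hedged sketch of that usage, not part of this
series (example_done() is a hypothetical completion callback, passed
the request itself as its data pointer):

  static void example_done(void *data, int err)
  {
          struct acomp_req *req = data;

          /* runs from the IDXD irq thread once the descriptor completes */
          pr_debug("async (de)compression done: err=%d dlen=%u\n",
                   err, req->dlen);
  }

  ...
  acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                             example_done, req);
  ret = crypto_acomp_compress(req);
  if (ret == -EINPROGRESS)
          return 0;       /* completion reported via example_done() */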

This mode can be enabled by writing 'async_irq' to the sync_mode
iaa_crypto driver attribute:

echo async_irq > /sys/bus/dsa/drivers/crypto/sync_mode

Async mode without interrupts (caller must poll) can be enabled by
writing 'async' to it:

echo async > /sys/bus/dsa/drivers/crypto/sync_mode

The default sync mode can be enabled by writing 'sync' to it:

echo sync > /sys/bus/dsa/drivers/crypto/sync_mode

The sync_mode value setting at the time the IAA algorithms are
registered is captured in each algorithm's crypto_ctx and used for all
compresses and decompresses when using a given algorithm.

Signed-off-by: Tom Zanussi <[email protected]>
---
drivers/crypto/intel/iaa/iaa_crypto.h | 2 +
drivers/crypto/intel/iaa/iaa_crypto_main.c | 236 ++++++++++++++++++++-
2 files changed, 236 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 4c6b0f5a6b50..de014ac53adb 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -153,6 +153,8 @@ enum iaa_mode {
struct iaa_compression_ctx {
enum iaa_mode mode;
bool verify_compress;
+ bool async_mode;
+ bool use_irq;
};

#endif
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 9b4acc343582..02adf65186e8 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -115,6 +115,102 @@ static ssize_t verify_compress_store(struct device_driver *driver,
}
static DRIVER_ATTR_RW(verify_compress);

+/*
+ * The iaa crypto driver supports three 'sync' methods determining how
+ * compressions and decompressions are performed:
+ *
+ * - sync: the compression or decompression completes before
+ * returning. This is the mode used by the async crypto
+ * interface when the sync mode is set to 'sync' and by
+ * the sync crypto interface regardless of setting.
+ *
+ * - async: the compression or decompression is submitted and returns
+ * immediately. Completion interrupts are not used so
+ * the caller is responsible for polling the descriptor
+ * for completion. This mode is applicable to only the
+ * async crypto interface and is ignored for anything
+ * else.
+ *
+ * - async_irq: the compression or decompression is submitted and
+ * returns immediately. Completion interrupts are
+ * enabled so the caller can wait for the completion and
+ * yield to other threads. When the compression or
+ * decompression completes, the completion is signaled
+ * and the caller awakened. This mode is applicable to
+ * only the async crypto interface and is ignored for
+ * anything else.
+ *
+ * These modes can be set using the iaa_crypto sync_mode driver
+ * attribute.
+ */
+
+/* Use async mode */
+static bool async_mode;
+/* Use interrupts */
+static bool use_irq;
+
+/**
+ * set_iaa_sync_mode - Set IAA sync mode
+ * @name: The name of the sync mode
+ *
+ * Make the IAA sync mode named @name the current sync mode used by
+ * compression/decompression.
+ */
+
+static int set_iaa_sync_mode(const char *name)
+{
+ int ret = 0;
+
+ if (sysfs_streq(name, "sync")) {
+ async_mode = false;
+ use_irq = false;
+ } else if (sysfs_streq(name, "async")) {
+ async_mode = true;
+ use_irq = false;
+ } else if (sysfs_streq(name, "async_irq")) {
+ async_mode = true;
+ use_irq = true;
+ } else {
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static ssize_t sync_mode_show(struct device_driver *driver, char *buf)
+{
+ int ret = 0;
+
+ if (!async_mode && !use_irq)
+ ret = sprintf(buf, "%s\n", "sync");
+ else if (async_mode && !use_irq)
+ ret = sprintf(buf, "%s\n", "async");
+ else if (async_mode && use_irq)
+ ret = sprintf(buf, "%s\n", "async_irq");
+
+ return ret;
+}
+
+static ssize_t sync_mode_store(struct device_driver *driver,
+ const char *buf, size_t count)
+{
+ int ret = -EBUSY;
+
+ mutex_lock(&iaa_devices_lock);
+
+ if (iaa_crypto_enabled)
+ goto out;
+
+ ret = set_iaa_sync_mode(buf);
+ if (ret == 0)
+ ret = count;
+out:
+ mutex_unlock(&iaa_devices_lock);
+
+ return ret;
+}
+static DRIVER_ATTR_RW(sync_mode);
+
static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX];

static int find_empty_iaa_compression_mode(void)
@@ -976,6 +1072,81 @@ static inline int check_completion(struct device *dev,
return ret;
}

+static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
+ struct idxd_wq *wq,
+ dma_addr_t src_addr, unsigned int slen,
+ dma_addr_t dst_addr, unsigned int *dlen,
+ u32 compression_crc);
+
+static void iaa_desc_complete(struct idxd_desc *idxd_desc,
+ enum idxd_complete_type comp_type,
+ bool free_desc, void *__ctx,
+ u32 *status)
+{
+ struct iaa_device_compression_mode *active_compression_mode;
+ struct iaa_compression_ctx *compression_ctx;
+ struct crypto_ctx *ctx = __ctx;
+ struct iaa_device *iaa_device;
+ struct idxd_device *idxd;
+ struct iaa_wq *iaa_wq;
+ struct pci_dev *pdev;
+ struct device *dev;
+ int ret, err = 0;
+
+ compression_ctx = crypto_tfm_ctx(ctx->tfm);
+
+ iaa_wq = idxd_wq_get_private(idxd_desc->wq);
+ iaa_device = iaa_wq->iaa_device;
+ idxd = iaa_device->idxd;
+ pdev = idxd->pdev;
+ dev = &pdev->dev;
+
+ active_compression_mode = get_iaa_device_compression_mode(iaa_device,
+ compression_ctx->mode);
+ dev_dbg(dev, "%s: compression mode %s,"
+ " ctx->src_addr %llx, ctx->dst_addr %llx\n", __func__,
+ active_compression_mode->name,
+ ctx->src_addr, ctx->dst_addr);
+
+ ret = check_completion(dev, idxd_desc->iax_completion,
+ ctx->compress, false);
+ if (ret) {
+ dev_dbg(dev, "%s: check_completion failed ret=%d\n", __func__, ret);
+ err = -EIO;
+ goto err;
+ }
+
+ ctx->req->dlen = idxd_desc->iax_completion->output_size;
+
+ if (ctx->compress && compression_ctx->verify_compress) {
+ u32 compression_crc;
+
+ compression_crc = idxd_desc->iax_completion->crc;
+ dma_sync_sg_for_device(dev, ctx->req->dst, 1, DMA_FROM_DEVICE);
+ dma_sync_sg_for_device(dev, ctx->req->src, 1, DMA_TO_DEVICE);
+ ret = iaa_compress_verify(ctx->tfm, ctx->req, iaa_wq->wq, ctx->src_addr,
+ ctx->req->slen, ctx->dst_addr, &ctx->req->dlen,
+ compression_crc);
+ if (ret) {
+ dev_dbg(dev, "%s: compress verify failed ret=%d\n", __func__, ret);
+ err = -EIO;
+ }
+ }
+err:
+ if (ctx->req->base.complete)
+ acomp_request_complete(ctx->req, err);
+
+ dma_unmap_sg(dev, ctx->req->dst, sg_nents(ctx->req->dst), DMA_FROM_DEVICE);
+ dma_unmap_sg(dev, ctx->req->src, sg_nents(ctx->req->src), DMA_TO_DEVICE);
+
+ if (ret != 0)
+ dev_dbg(dev, "asynchronous compress failed ret=%d\n", ret);
+
+ if (free_desc)
+ idxd_free_desc(idxd_desc->wq, idxd_desc);
+ iaa_wq_put(idxd_desc->wq);
+}
+
static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
struct idxd_wq *wq,
dma_addr_t src_addr, unsigned int slen,
@@ -1024,6 +1195,22 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
desc->src2_size = sizeof(struct aecs_comp_table_record);
desc->completion_addr = idxd_desc->compl_dma;

+ if (ctx->use_irq) {
+ desc->flags |= IDXD_OP_FLAG_RCI;
+
+ idxd_desc->crypto.req = req;
+ idxd_desc->crypto.tfm = tfm;
+ idxd_desc->crypto.src_addr = src_addr;
+ idxd_desc->crypto.dst_addr = dst_addr;
+ idxd_desc->crypto.compress = true;
+
+ dev_dbg(dev, "%s use_async_irq: compression mode %s,"
+ " src_addr %llx, dst_addr %llx\n", __func__,
+ active_compression_mode->name,
+ src_addr, dst_addr);
+ } else if (ctx->async_mode && !disable_async)
+ req->base.data = idxd_desc;
+
dev_dbg(dev, "%s: compression mode %s,"
" desc->src1_addr %llx, desc->src1_size %d,"
" desc->dst_addr %llx, desc->max_dst_size %d,"
@@ -1038,6 +1225,12 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
goto err;
}

+ if (ctx->async_mode && !disable_async) {
+ ret = -EINPROGRESS;
+ dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
+ goto out;
+ }
+
ret = check_completion(dev, idxd_desc->iax_completion, true, false);
if (ret) {
dev_dbg(dev, "check_completion failed ret=%d\n", ret);
@@ -1048,7 +1241,8 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,

*compression_crc = idxd_desc->iax_completion->crc;

- idxd_free_desc(wq, idxd_desc);
+ if (!ctx->async_mode)
+ idxd_free_desc(wq, idxd_desc);
out:
return ret;
err:
@@ -1191,6 +1385,22 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
desc->src1_size = slen;
desc->completion_addr = idxd_desc->compl_dma;

+ if (ctx->use_irq && !disable_async) {
+ desc->flags |= IDXD_OP_FLAG_RCI;
+
+ idxd_desc->crypto.req = req;
+ idxd_desc->crypto.tfm = tfm;
+ idxd_desc->crypto.src_addr = src_addr;
+ idxd_desc->crypto.dst_addr = dst_addr;
+ idxd_desc->crypto.compress = false;
+
+ dev_dbg(dev, "%s: use_async_irq compression mode %s,"
+ " src_addr %llx, dst_addr %llx\n", __func__,
+ active_compression_mode->name,
+ src_addr, dst_addr);
+ } else if (ctx->async_mode && !disable_async)
+ req->base.data = idxd_desc;
+
dev_dbg(dev, "%s: decompression mode %s,"
" desc->src1_addr %llx, desc->src1_size %d,"
" desc->dst_addr %llx, desc->max_dst_size %d,"
@@ -1205,6 +1415,12 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
goto err;
}

+ if (ctx->async_mode && !disable_async) {
+ ret = -EINPROGRESS;
+ dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
+ goto out;
+ }
+
ret = check_completion(dev, idxd_desc->iax_completion, false, false);
if (ret) {
dev_dbg(dev, "check_completion failed ret=%d\n", ret);
@@ -1213,7 +1429,8 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,

*dlen = idxd_desc->iax_completion->output_size;

- idxd_free_desc(wq, idxd_desc);
+ if (!ctx->async_mode)
+ idxd_free_desc(wq, idxd_desc);
out:
return ret;
err:
@@ -1497,6 +1714,8 @@ static int iaa_comp_adecompress(struct acomp_req *req)
static void compression_ctx_init(struct iaa_compression_ctx *ctx)
{
ctx->verify_compress = iaa_verify_compress;
+ ctx->async_mode = async_mode;
+ ctx->use_irq = use_irq;
}

static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
@@ -1695,6 +1914,7 @@ static struct idxd_device_driver iaa_crypto_driver = {
.remove = iaa_crypto_remove,
.name = IDXD_SUBDRIVER_NAME,
.type = dev_types,
+ .desc_complete = iaa_desc_complete,
};

static int __init iaa_crypto_init_module(void)
@@ -1724,10 +1944,20 @@ static int __init iaa_crypto_init_module(void)
goto err_verify_attr_create;
}

+ ret = driver_create_file(&iaa_crypto_driver.drv,
+ &driver_attr_sync_mode);
+ if (ret) {
+ pr_debug("IAA sync mode attr creation failed\n");
+ goto err_sync_attr_create;
+ }
+
pr_debug("initialized\n");
out:
return ret;

+err_sync_attr_create:
+ driver_remove_file(&iaa_crypto_driver.drv,
+ &driver_attr_verify_compress);
err_verify_attr_create:
idxd_driver_unregister(&iaa_crypto_driver);
err_driver_reg:
@@ -1741,6 +1971,8 @@ static void __exit iaa_crypto_cleanup_module(void)
if (iaa_unregister_compression_device())
pr_debug("IAA compression device unregister failed\n");

+ driver_remove_file(&iaa_crypto_driver.drv,
+ &driver_attr_sync_mode);
driver_remove_file(&iaa_crypto_driver.drv,
&driver_attr_verify_compress);
idxd_driver_unregister(&iaa_crypto_driver);
--
2.34.1


2023-07-10 19:09:45

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v7 14/14] crypto: iaa - Add IAA Compression Accelerator stats

Add optional debugfs statistics support for the IAA Compression
Accelerator. This is enabled by the kernel config item:

CRYPTO_DEV_IAA_CRYPTO_STATS

When enabled, the IAA crypto driver will generate statistics that can
be accessed at /sys/kernel/debug/iaa_crypto/.

See Documentation/driver-api/crypto/iax/iax-crypto.rst for details.
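
For example (a usage sketch, assuming debugfs is mounted at
/sys/kernel/debug and the stats config option above is enabled), the
aggregated statistics can be dumped and reset via the files this patch
creates under the 'iaa_crypto' debugfs directory:

  cat /sys/kernel/debug/iaa_crypto/wq_stats
  echo 1 > /sys/kernel/debug/iaa_crypto/stats_reset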

Signed-off-by: Tom Zanussi <[email protected]>
---
drivers/crypto/intel/iaa/Kconfig | 9 +
drivers/crypto/intel/iaa/Makefile | 2 +
drivers/crypto/intel/iaa/iaa_crypto.h | 22 ++
drivers/crypto/intel/iaa/iaa_crypto_main.c | 65 +++++
drivers/crypto/intel/iaa/iaa_crypto_stats.c | 271 ++++++++++++++++++++
drivers/crypto/intel/iaa/iaa_crypto_stats.h | 58 +++++
6 files changed, 427 insertions(+)
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_stats.c
create mode 100644 drivers/crypto/intel/iaa/iaa_crypto_stats.h

diff --git a/drivers/crypto/intel/iaa/Kconfig b/drivers/crypto/intel/iaa/Kconfig
index fcccb6ff7e29..d53f4b1d494f 100644
--- a/drivers/crypto/intel/iaa/Kconfig
+++ b/drivers/crypto/intel/iaa/Kconfig
@@ -8,3 +8,12 @@ config CRYPTO_DEV_IAA_CRYPTO
decompression with the Intel Analytics Accelerator (IAA)
hardware using the cryptographic API. If you choose 'M'
here, the module will be called iaa_crypto.
+
+config CRYPTO_DEV_IAA_CRYPTO_STATS
+ bool "Enable Intel(R) IAA Compression Accelerator Statistics"
+ depends on CRYPTO_DEV_IAA_CRYPTO
+ default n
+ help
+ Enable statistics for the IAA compression accelerator.
+ These include per-device and per-workqueue statistics in
+ addition to global driver statistics.
diff --git a/drivers/crypto/intel/iaa/Makefile b/drivers/crypto/intel/iaa/Makefile
index cc87feffd059..b64b208d2344 100644
--- a/drivers/crypto/intel/iaa/Makefile
+++ b/drivers/crypto/intel/iaa/Makefile
@@ -8,3 +8,5 @@ ccflags-y += -I $(srctree)/drivers/dma/idxd -DDEFAULT_SYMBOL_NAMESPACE=IDXD
obj-$(CONFIG_CRYPTO_DEV_IAA_CRYPTO) := iaa_crypto.o

iaa_crypto-y := iaa_crypto_main.o iaa_crypto_comp_fixed.o
+
+iaa_crypto-$(CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS) += iaa_crypto_stats.o
diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index de014ac53adb..e0fee392a43a 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -48,6 +48,11 @@ struct iaa_wq {
bool remove;

struct iaa_device *iaa_device;
+
+ u64 comp_calls;
+ u64 comp_bytes;
+ u64 decomp_calls;
+ u64 decomp_bytes;
};

struct iaa_device_compression_mode {
@@ -69,6 +74,11 @@ struct iaa_device {

int n_wq;
struct list_head wqs;
+
+ u64 comp_calls;
+ u64 comp_bytes;
+ u64 decomp_calls;
+ u64 decomp_bytes;
};

struct wq_table_entry {
@@ -157,4 +167,16 @@ struct iaa_compression_ctx {
bool use_irq;
};

+#if defined(CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS)
+void global_stats_show(struct seq_file *m);
+void device_stats_show(struct seq_file *m, struct iaa_device *iaa_device);
+void reset_iaa_crypto_stats(void);
+void reset_device_stats(struct iaa_device *iaa_device);
+#else
+static inline void global_stats_show(struct seq_file *m) {}
+static inline void device_stats_show(struct seq_file *m, struct iaa_device *iaa_device) {}
+static inline void reset_iaa_crypto_stats(void) {}
+static inline void reset_device_stats(struct iaa_device *iaa_device) {}
+#endif
+
#endif
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 02adf65186e8..17f3992ab450 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -14,6 +14,7 @@

#include "idxd.h"
#include "iaa_crypto.h"
+#include "iaa_crypto_stats.h"

#ifdef pr_fmt
#undef pr_fmt
@@ -1044,6 +1045,7 @@ static inline int check_completion(struct device *dev,
ret = -ETIMEDOUT;
dev_dbg(dev, "%s timed out, size=0x%x\n",
op_str, comp->output_size);
+ update_completion_timeout_errs();
goto out;
}

@@ -1053,6 +1055,7 @@ static inline int check_completion(struct device *dev,
dev_dbg(dev, "compressed > uncompressed size,"
" not compressing, size=0x%x\n",
comp->output_size);
+ update_completion_comp_buf_overflow_errs();
goto out;
}

@@ -1065,6 +1068,7 @@ static inline int check_completion(struct device *dev,
dev_dbg(dev, "iaa %s status=0x%x, error=0x%x, size=0x%x\n",
op_str, comp->status, comp->error_code, comp->output_size);
print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET, 8, 1, comp, 64, 0);
+ update_completion_einval_errs();

goto out;
}
@@ -1118,6 +1122,15 @@ static void iaa_desc_complete(struct idxd_desc *idxd_desc,

ctx->req->dlen = idxd_desc->iax_completion->output_size;

+ /* Update stats */
+ if (ctx->compress) {
+ update_total_comp_bytes_out(ctx->req->dlen);
+ update_wq_comp_bytes(iaa_wq->wq, ctx->req->dlen);
+ } else {
+ update_total_decomp_bytes_in(ctx->req->dlen);
+ update_wq_decomp_bytes(iaa_wq->wq, ctx->req->dlen);
+ }
+
if (ctx->compress && compression_ctx->verify_compress) {
u32 compression_crc;

@@ -1225,6 +1238,10 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
goto err;
}

+ /* Update stats */
+ update_total_comp_calls();
+ update_wq_comp_calls(wq);
+
if (ctx->async_mode && !disable_async) {
ret = -EINPROGRESS;
dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
@@ -1239,6 +1256,10 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,

*dlen = idxd_desc->iax_completion->output_size;

+ /* Update stats */
+ update_total_comp_bytes_out(*dlen);
+ update_wq_comp_bytes(wq, *dlen);
+
*compression_crc = idxd_desc->iax_completion->crc;

if (!ctx->async_mode)
@@ -1415,6 +1436,10 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
goto err;
}

+ /* Update stats */
+ update_total_decomp_calls();
+ update_wq_decomp_calls(wq);
+
if (ctx->async_mode && !disable_async) {
ret = -EINPROGRESS;
dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
@@ -1431,6 +1456,10 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,

if (!ctx->async_mode)
idxd_free_desc(wq, idxd_desc);
+
+ /* Update stats */
+ update_total_decomp_bytes_in(slen);
+ update_wq_decomp_bytes(wq, slen);
out:
return ret;
err:
@@ -1917,6 +1946,38 @@ static struct idxd_device_driver iaa_crypto_driver = {
.desc_complete = iaa_desc_complete,
};

+int wq_stats_show(struct seq_file *m, void *v)
+{
+ struct iaa_device *iaa_device;
+
+ mutex_lock(&iaa_devices_lock);
+
+ global_stats_show(m);
+
+ list_for_each_entry(iaa_device, &iaa_devices, list)
+ device_stats_show(m, iaa_device);
+
+ mutex_unlock(&iaa_devices_lock);
+
+ return 0;
+}
+
+int iaa_crypto_stats_reset(void *data, u64 value)
+{
+ struct iaa_device *iaa_device;
+
+ reset_iaa_crypto_stats();
+
+ mutex_lock(&iaa_devices_lock);
+
+ list_for_each_entry(iaa_device, &iaa_devices, list)
+ reset_device_stats(iaa_device);
+
+ mutex_unlock(&iaa_devices_lock);
+
+ return 0;
+}
+
static int __init iaa_crypto_init_module(void)
{
int ret = 0;
@@ -1951,6 +2012,9 @@ static int __init iaa_crypto_init_module(void)
goto err_sync_attr_create;
}

+ if (iaa_crypto_debugfs_init())
+ pr_warn("debugfs init failed, stats not available\n");
+
pr_debug("initialized\n");
out:
return ret;
@@ -1971,6 +2035,7 @@ static void __exit iaa_crypto_cleanup_module(void)
if (iaa_unregister_compression_device())
pr_debug("IAA compression device unregister failed\n");

+ iaa_crypto_debugfs_cleanup();
driver_remove_file(&iaa_crypto_driver.drv,
&driver_attr_sync_mode);
driver_remove_file(&iaa_crypto_driver.drv,
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_stats.c b/drivers/crypto/intel/iaa/iaa_crypto_stats.c
new file mode 100644
index 000000000000..6ad2171bc5d9
--- /dev/null
+++ b/drivers/crypto/intel/iaa/iaa_crypto_stats.c
@@ -0,0 +1,271 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Intel Corporation. All rights rsvd. */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/smp.h>
+#include <uapi/linux/idxd.h>
+#include <linux/idxd.h>
+#include <linux/dmaengine.h>
+#include "../../dma/idxd/idxd.h"
+#include <linux/debugfs.h>
+#include <crypto/internal/acompress.h>
+#include "iaa_crypto.h"
+#include "iaa_crypto_stats.h"
+
+static u64 total_comp_calls;
+static u64 total_decomp_calls;
+static u64 max_comp_delay_ns;
+static u64 max_decomp_delay_ns;
+static u64 max_acomp_delay_ns;
+static u64 max_adecomp_delay_ns;
+static u64 total_comp_bytes_out;
+static u64 total_decomp_bytes_in;
+static u64 total_completion_einval_errors;
+static u64 total_completion_timeout_errors;
+static u64 total_completion_comp_buf_overflow_errors;
+
+static struct dentry *iaa_crypto_debugfs_root;
+
+void update_total_comp_calls(void)
+{
+ total_comp_calls++;
+}
+
+void update_total_comp_bytes_out(int n)
+{
+ total_comp_bytes_out += n;
+}
+
+void update_total_decomp_calls(void)
+{
+ total_decomp_calls++;
+}
+
+void update_total_decomp_bytes_in(int n)
+{
+ total_decomp_bytes_in += n;
+}
+
+void update_completion_einval_errs(void)
+{
+ total_completion_einval_errors++;
+}
+
+void update_completion_timeout_errs(void)
+{
+ total_completion_timeout_errors++;
+}
+
+void update_completion_comp_buf_overflow_errs(void)
+{
+ total_completion_comp_buf_overflow_errors++;
+}
+
+void update_max_comp_delay_ns(u64 start_time_ns)
+{
+ u64 time_diff;
+
+ time_diff = ktime_get_ns() - start_time_ns;
+
+ if (time_diff > max_comp_delay_ns)
+ max_comp_delay_ns = time_diff;
+}
+
+void update_max_decomp_delay_ns(u64 start_time_ns)
+{
+ u64 time_diff;
+
+ time_diff = ktime_get_ns() - start_time_ns;
+
+ if (time_diff > max_decomp_delay_ns)
+ max_decomp_delay_ns = time_diff;
+}
+
+void update_max_acomp_delay_ns(u64 start_time_ns)
+{
+ u64 time_diff;
+
+ time_diff = ktime_get_ns() - start_time_ns;
+
+ if (time_diff > max_acomp_delay_ns)
+ max_acomp_delay_ns = time_diff;
+}
+
+void update_max_adecomp_delay_ns(u64 start_time_ns)
+{
+ u64 time_diff;
+
+ time_diff = ktime_get_ns() - start_time_ns;
+
+ if (time_diff > max_adecomp_delay_ns)
+ max_adecomp_delay_ns = time_diff;
+}
+
+void update_wq_comp_calls(struct idxd_wq *idxd_wq)
+{
+ struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);
+
+ wq->comp_calls++;
+ wq->iaa_device->comp_calls++;
+}
+
+void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n)
+{
+ struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);
+
+ wq->comp_bytes += n;
+ wq->iaa_device->comp_bytes += n;
+}
+
+void update_wq_decomp_calls(struct idxd_wq *idxd_wq)
+{
+ struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);
+
+ wq->decomp_calls++;
+ wq->iaa_device->decomp_calls++;
+}
+
+void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n)
+{
+ struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);
+
+ wq->decomp_bytes += n;
+ wq->iaa_device->decomp_bytes += n;
+}
+
+void reset_iaa_crypto_stats(void)
+{
+ total_comp_calls = 0;
+ total_decomp_calls = 0;
+ max_comp_delay_ns = 0;
+ max_decomp_delay_ns = 0;
+ max_acomp_delay_ns = 0;
+ max_adecomp_delay_ns = 0;
+ total_comp_bytes_out = 0;
+ total_decomp_bytes_in = 0;
+ total_completion_einval_errors = 0;
+ total_completion_timeout_errors = 0;
+ total_completion_comp_buf_overflow_errors = 0;
+}
+
+static void reset_wq_stats(struct iaa_wq *wq)
+{
+ wq->comp_calls = 0;
+ wq->comp_bytes = 0;
+ wq->decomp_calls = 0;
+ wq->decomp_bytes = 0;
+}
+
+void reset_device_stats(struct iaa_device *iaa_device)
+{
+ struct iaa_wq *iaa_wq;
+
+ iaa_device->comp_calls = 0;
+ iaa_device->comp_bytes = 0;
+ iaa_device->decomp_calls = 0;
+ iaa_device->decomp_bytes = 0;
+
+ list_for_each_entry(iaa_wq, &iaa_device->wqs, list)
+ reset_wq_stats(iaa_wq);
+}
+
+static void wq_show(struct seq_file *m, struct iaa_wq *iaa_wq)
+{
+ seq_printf(m, " name: %s\n", iaa_wq->wq->name);
+ seq_printf(m, " comp_calls: %llu\n", iaa_wq->comp_calls);
+ seq_printf(m, " comp_bytes: %llu\n", iaa_wq->comp_bytes);
+ seq_printf(m, " decomp_calls: %llu\n", iaa_wq->decomp_calls);
+ seq_printf(m, " decomp_bytes: %llu\n\n", iaa_wq->decomp_bytes);
+}
+
+void device_stats_show(struct seq_file *m, struct iaa_device *iaa_device)
+{
+ struct iaa_wq *iaa_wq;
+
+ seq_puts(m, "iaa device:\n");
+ seq_printf(m, " id: %d\n", iaa_device->idxd->id);
+ seq_printf(m, " n_wqs: %d\n", iaa_device->n_wq);
+ seq_printf(m, " comp_calls: %llu\n", iaa_device->comp_calls);
+ seq_printf(m, " comp_bytes: %llu\n", iaa_device->comp_bytes);
+ seq_printf(m, " decomp_calls: %llu\n", iaa_device->decomp_calls);
+ seq_printf(m, " decomp_bytes: %llu\n", iaa_device->decomp_bytes);
+ seq_puts(m, " wqs:\n");
+
+ list_for_each_entry(iaa_wq, &iaa_device->wqs, list)
+ wq_show(m, iaa_wq);
+}
+
+void global_stats_show(struct seq_file *m)
+{
+ seq_puts(m, "global stats:\n");
+ seq_printf(m, " total_comp_calls: %llu\n", total_comp_calls);
+ seq_printf(m, " total_decomp_calls: %llu\n", total_decomp_calls);
+ seq_printf(m, " total_comp_bytes_out: %llu\n", total_comp_bytes_out);
+ seq_printf(m, " total_decomp_bytes_in: %llu\n", total_decomp_bytes_in);
+ seq_printf(m, " total_completion_einval_errors: %llu\n",
+ total_completion_einval_errors);
+ seq_printf(m, " total_completion_timeout_errors: %llu\n",
+ total_completion_timeout_errors);
+ seq_printf(m, " total_completion_comp_buf_overflow_errors: %llu\n\n",
+ total_completion_comp_buf_overflow_errors);
+}
+
+static int wq_stats_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, wq_stats_show, file);
+}
+
+const struct file_operations wq_stats_fops = {
+ .open = wq_stats_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+DEFINE_DEBUGFS_ATTRIBUTE(wq_stats_reset_fops, NULL, iaa_crypto_stats_reset, "%llu\n");
+
+int __init iaa_crypto_debugfs_init(void)
+{
+ if (!debugfs_initialized())
+ return -ENODEV;
+
+ iaa_crypto_debugfs_root = debugfs_create_dir("iaa_crypto", NULL);
+ if (!iaa_crypto_debugfs_root)
+ return -ENOMEM;
+
+ debugfs_create_u64("max_comp_delay_ns", 0644,
+ iaa_crypto_debugfs_root, &max_comp_delay_ns);
+ debugfs_create_u64("max_decomp_delay_ns", 0644,
+ iaa_crypto_debugfs_root, &max_decomp_delay_ns);
+ debugfs_create_u64("max_acomp_delay_ns", 0644,
+ iaa_crypto_debugfs_root, &max_comp_delay_ns);
+ debugfs_create_u64("max_adecomp_delay_ns", 0644,
+ iaa_crypto_debugfs_root, &max_decomp_delay_ns);
+ debugfs_create_u64("total_comp_calls", 0644,
+ iaa_crypto_debugfs_root, &total_comp_calls);
+ debugfs_create_u64("total_decomp_calls", 0644,
+ iaa_crypto_debugfs_root, &total_decomp_calls);
+ debugfs_create_u64("total_comp_bytes_out", 0644,
+ iaa_crypto_debugfs_root, &total_comp_bytes_out);
+ debugfs_create_u64("total_decomp_bytes_in", 0644,
+ iaa_crypto_debugfs_root, &total_decomp_bytes_in);
+ debugfs_create_file("wq_stats", 0644, iaa_crypto_debugfs_root, NULL,
+ &wq_stats_fops);
+ debugfs_create_file("stats_reset", 0644, iaa_crypto_debugfs_root, NULL,
+ &wq_stats_reset_fops);
+
+ return 0;
+}
+
+void __exit iaa_crypto_debugfs_cleanup(void)
+{
+ debugfs_remove_recursive(iaa_crypto_debugfs_root);
+}
+
+MODULE_LICENSE("GPL");
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_stats.h b/drivers/crypto/intel/iaa/iaa_crypto_stats.h
new file mode 100644
index 000000000000..ad8333329fa6
--- /dev/null
+++ b/drivers/crypto/intel/iaa/iaa_crypto_stats.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Intel Corporation. All rights rsvd. */
+
+#ifndef __CRYPTO_DEV_IAA_CRYPTO_STATS_H__
+#define __CRYPTO_DEV_IAA_CRYPTO_STATS_H__
+
+#if defined(CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS)
+int iaa_crypto_debugfs_init(void);
+void iaa_crypto_debugfs_cleanup(void);
+
+void update_total_comp_calls(void);
+void update_total_comp_bytes_out(int n);
+void update_total_decomp_calls(void);
+void update_total_decomp_bytes_in(int n);
+void update_max_comp_delay_ns(u64 start_time_ns);
+void update_max_decomp_delay_ns(u64 start_time_ns);
+void update_max_acomp_delay_ns(u64 start_time_ns);
+void update_max_adecomp_delay_ns(u64 start_time_ns);
+void update_completion_einval_errs(void);
+void update_completion_timeout_errs(void);
+void update_completion_comp_buf_overflow_errs(void);
+
+void update_wq_comp_calls(struct idxd_wq *idxd_wq);
+void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n);
+void update_wq_decomp_calls(struct idxd_wq *idxd_wq);
+void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n);
+
+int wq_stats_show(struct seq_file *m, void *v);
+int iaa_crypto_stats_reset(void *data, u64 value);
+
+static inline u64 iaa_get_ts(void) { return ktime_get_ns(); }
+
+#else
+static inline int iaa_crypto_debugfs_init(void) { return 0; }
+static inline void iaa_crypto_debugfs_cleanup(void) {}
+
+static inline void update_total_comp_calls(void) {}
+static inline void update_total_comp_bytes_out(int n) {}
+static inline void update_total_decomp_calls(void) {}
+static inline void update_total_decomp_bytes_in(int n) {}
+static inline void update_max_comp_delay_ns(u64 start_time_ns) {}
+static inline void update_max_decomp_delay_ns(u64 start_time_ns) {}
+static inline void update_max_acomp_delay_ns(u64 start_time_ns) {}
+static inline void update_max_adecomp_delay_ns(u64 start_time_ns) {}
+static inline void update_completion_einval_errs(void) {}
+static inline void update_completion_timeout_errs(void) {}
+static inline void update_completion_comp_buf_overflow_errs(void) {}
+
+static inline void update_wq_comp_calls(struct idxd_wq *idxd_wq) {}
+static inline void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n) {}
+static inline void update_wq_decomp_calls(struct idxd_wq *idxd_wq) {}
+static inline void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n) {}
+
+static inline u64 iaa_get_ts(void) { return 0; }
+
+#endif // CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS
+
+#endif
--
2.34.1


2023-07-10 19:10:02

by Tom Zanussi

[permalink] [raw]
Subject: [PATCH v7 12/14] crypto: iaa - Add support for deflate-iaa compression algorithm

This patch registers the deflate-iaa deflate compression algorithm and
hooks it up to the IAA hardware using the 'fixed' compression mode
introduced in the previous patch.

Because the IAA hardware has a 4k history-window limitation, only
buffers <= 4k, or buffers that have been compressed using a <= 4k
history window, are technically compliant with the deflate spec, which
allows for a window of up to 32k. Because of this limitation, the IAA
fixed mode deflate algorithm is given its own algorithm name,
'deflate-iaa'.

With this change, the deflate-iaa crypto algorithm is registered and
operational, and compression and decompression operations are fully
enabled following the successful binding of the first IAA workqueue
to the iaa_crypto sub-driver.

When there are no IAA workqueues bound to the driver, the IAA crypto
algorithm can be unregistered by removing the module.

A new iaa_crypto 'verify_compress' driver attribute is also added,
allowing the user to toggle compression verification. If set, each
compress will be internally decompressed and the contents verified,
returning error codes if unsuccessful. This can be toggled with 0/1:

echo 0 > /sys/bus/dsa/drivers/crypto/verify_compress

The default setting is '1' - verify all compresses.

The verify_compress value setting at the time the algorithm is
registered is captured in the algorithm's crypto_ctx and used for all
compresses when using the algorithm.
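
As a rough usage sketch (not part of this patch), a kernel caller could
exercise 'deflate-iaa' through the generic acomp API along the lines
below; 'src', 'dst', 'slen' and 'dlen' are assumed to be a
caller-prepared scatterlist pair and their lengths:

	struct crypto_acomp *tfm;
	struct acomp_req *req;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	tfm = crypto_alloc_acomp("deflate-iaa", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm);
	if (!req) {
		crypto_free_acomp(tfm);
		return -ENOMEM;
	}

	/* src/dst scatterlists and lengths are assumed caller-provided */
	acomp_request_set_params(req, src, dst, slen, dlen);
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);

	/* Wait synchronously for the (possibly asynchronous) operation */
	ret = crypto_wait_req(crypto_acomp_compress(req), &wait);

	acomp_request_free(req);
	crypto_free_acomp(tfm);

Decompression works the same way via crypto_acomp_decompress(). With
verify_compress enabled, the compress path additionally decompresses
and CRC-checks the result internally before completing the request.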

[ Based on work originally by George Powley, Jing Lin and Kyung Min
Park ]

Signed-off-by: Tom Zanussi <[email protected]>
---
crypto/testmgr.c | 10 +
drivers/crypto/intel/iaa/iaa_crypto.h | 36 +
drivers/crypto/intel/iaa/iaa_crypto_main.c | 921 ++++++++++++++++++++-
3 files changed, 950 insertions(+), 17 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 216878c8bc3d..b6d924e0ff59 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -4819,6 +4819,16 @@ static const struct alg_test_desc alg_test_descs[] = {
.decomp = __VECS(deflate_decomp_tv_template)
}
}
+ }, {
+ .alg = "deflate-iaa",
+ .test = alg_test_comp,
+ .fips_allowed = 1,
+ .suite = {
+ .comp = {
+ .comp = __VECS(deflate_comp_tv_template),
+ .decomp = __VECS(deflate_decomp_tv_template)
+ }
+ }
}, {
.alg = "dh",
.test = alg_test_kpp,
diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 33e68f9d3d02..4c6b0f5a6b50 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -10,15 +10,42 @@

#define IDXD_SUBDRIVER_NAME "crypto"

+#define IAA_DECOMP_ENABLE BIT(0)
+#define IAA_DECOMP_FLUSH_OUTPUT BIT(1)
+#define IAA_DECOMP_CHECK_FOR_EOB BIT(2)
+#define IAA_DECOMP_STOP_ON_EOB BIT(3)
+#define IAA_DECOMP_SUPPRESS_OUTPUT BIT(9)
+
+#define IAA_COMP_FLUSH_OUTPUT BIT(1)
+#define IAA_COMP_APPEND_EOB BIT(2)
+
+#define IAA_COMPLETION_TIMEOUT 1000000
+
+#define IAA_ANALYTICS_ERROR 0x0a
+#define IAA_ERROR_DECOMP_BUF_OVERFLOW 0x0b
+#define IAA_ERROR_COMP_BUF_OVERFLOW 0x19
+#define IAA_ERROR_WATCHDOG_EXPIRED 0x24
+
#define IAA_COMP_MODES_MAX 2

#define FIXED_HDR 0x2
#define FIXED_HDR_SIZE 3

+#define IAA_COMP_FLAGS (IAA_COMP_FLUSH_OUTPUT | \
+ IAA_COMP_APPEND_EOB)
+
+#define IAA_DECOMP_FLAGS (IAA_DECOMP_ENABLE | \
+ IAA_DECOMP_FLUSH_OUTPUT | \
+ IAA_DECOMP_CHECK_FOR_EOB | \
+ IAA_DECOMP_STOP_ON_EOB)
+
/* Representation of IAA workqueue */
struct iaa_wq {
struct list_head list;
+
struct idxd_wq *wq;
+ int ref;
+ bool remove;

struct iaa_device *iaa_device;
};
@@ -119,4 +146,13 @@ int add_iaa_compression_mode(const char *name,

void remove_iaa_compression_mode(const char *name);

+enum iaa_mode {
+ IAA_MODE_FIXED,
+};
+
+struct iaa_compression_ctx {
+ enum iaa_mode mode;
+ bool verify_compress;
+};
+
#endif
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 0c59332456f0..9b4acc343582 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -10,6 +10,7 @@
#include <uapi/linux/idxd.h>
#include <linux/highmem.h>
#include <linux/sched/smt.h>
+#include <crypto/internal/acompress.h>

#include "idxd.h"
#include "iaa_crypto.h"
@@ -32,6 +33,20 @@ static unsigned int cpus_per_iaa;
/* Per-cpu lookup table for balanced wqs */
static struct wq_table_entry __percpu *wq_table;

+static struct idxd_wq *wq_table_next_wq(int cpu)
+{
+ struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
+
+ if (++entry->cur_wq >= entry->n_wqs)
+ entry->cur_wq = 0;
+
+ pr_debug("%s: returning wq at idx %d (iaa wq %d.%d) from cpu %d\n", __func__,
+ entry->cur_wq, entry->wqs[entry->cur_wq]->idxd->id,
+ entry->wqs[entry->cur_wq]->id, cpu);
+
+ return entry->wqs[entry->cur_wq];
+}
+
static void wq_table_add(int cpu, struct idxd_wq *wq)
{
struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
@@ -66,6 +81,40 @@ static void wq_table_clear_entry(int cpu)
static LIST_HEAD(iaa_devices);
static DEFINE_MUTEX(iaa_devices_lock);

+/* If enabled, IAA hw crypto algos are registered, unavailable otherwise */
+static bool iaa_crypto_enabled;
+static bool iaa_crypto_registered;
+
+/* Verify results of IAA compress or not */
+static bool iaa_verify_compress = true;
+
+static ssize_t verify_compress_show(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%d\n", iaa_verify_compress);
+}
+
+static ssize_t verify_compress_store(struct device_driver *driver,
+ const char *buf, size_t count)
+{
+ int ret = -EBUSY;
+
+ mutex_lock(&iaa_devices_lock);
+
+ if (iaa_crypto_enabled)
+ goto out;
+
+ ret = kstrtobool(buf, &iaa_verify_compress);
+ if (ret)
+ goto out;
+
+ ret = count;
+out:
+ mutex_unlock(&iaa_devices_lock);
+
+ return ret;
+}
+static DRIVER_ATTR_RW(verify_compress);
+
static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX];

static int find_empty_iaa_compression_mode(void)
@@ -250,6 +299,12 @@ int add_iaa_compression_mode(const char *name,
}
EXPORT_SYMBOL_GPL(add_iaa_compression_mode);

+static struct iaa_device_compression_mode *
+get_iaa_device_compression_mode(struct iaa_device *iaa_device, int idx)
+{
+ return iaa_device->compression_modes[idx];
+}
+
static void free_device_compression_mode(struct iaa_device *iaa_device,
struct iaa_device_compression_mode *device_mode)
{
@@ -268,6 +323,86 @@ static void free_device_compression_mode(struct iaa_device *iaa_device,
kfree(device_mode);
}

+#define IDXD_OP_FLAG_AECS_RW_TGLS 0x400000
+#define IAX_AECS_DEFAULT_FLAG (IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC)
+#define IAX_AECS_COMPRESS_FLAG (IAX_AECS_DEFAULT_FLAG | IDXD_OP_FLAG_RD_SRC2_AECS)
+#define IAX_AECS_DECOMPRESS_FLAG (IAX_AECS_DEFAULT_FLAG | IDXD_OP_FLAG_RD_SRC2_AECS)
+#define IAX_AECS_GEN_FLAG (IAX_AECS_DEFAULT_FLAG | \
+ IDXD_OP_FLAG_WR_SRC2_AECS_COMP | \
+ IDXD_OP_FLAG_AECS_RW_TGLS)
+
+static int check_completion(struct device *dev,
+ struct iax_completion_record *comp,
+ bool compress,
+ bool only_once);
+
+static int decompress_header(struct iaa_device_compression_mode *device_mode,
+ struct iaa_compression_mode *mode,
+ struct idxd_wq *wq)
+{
+ dma_addr_t src_addr, src2_addr;
+ struct idxd_desc *idxd_desc;
+ struct iax_hw_desc *desc;
+ struct device *dev;
+ int ret = 0;
+
+ idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
+ if (IS_ERR(idxd_desc))
+ return PTR_ERR(idxd_desc);
+
+ desc = idxd_desc->iax_hw;
+
+ dev = &wq->idxd->pdev->dev;
+
+ src_addr = dma_map_single(dev, (void *)mode->header_table,
+ mode->header_table_size, DMA_TO_DEVICE);
+ dev_dbg(dev, "%s: mode->name %s, src_addr %llx, dev %p, src %p, slen %d\n",
+ __func__, mode->name, src_addr, dev,
+ mode->header_table, mode->header_table_size);
+ if (unlikely(dma_mapping_error(dev, src_addr))) {
+ dev_dbg(dev, "dma_map_single err, exiting\n");
+ ret = -ENOMEM;
+ return ret;
+ }
+
+ desc->flags = IAX_AECS_GEN_FLAG;
+ desc->opcode = IAX_OPCODE_DECOMPRESS;
+
+ desc->src1_addr = (u64)src_addr;
+ desc->src1_size = mode->header_table_size;
+
+ src2_addr = device_mode->aecs_decomp_table_dma_addr;
+ desc->src2_addr = (u64)src2_addr;
+ desc->src2_size = 1088;
+ dev_dbg(dev, "%s: mode->name %s, src2_addr %llx, dev %p, src2_size %d\n",
+ __func__, mode->name, desc->src2_addr, dev, desc->src2_size);
+ desc->max_dst_size = 0; // suppressed output
+
+ desc->decompr_flags = mode->gen_decomp_table_flags;
+
+ desc->priv = 1;
+
+ desc->completion_addr = idxd_desc->compl_dma;
+
+ ret = idxd_submit_desc(wq, idxd_desc);
+ if (ret) {
+ pr_err("%s: submit_desc failed ret=0x%x\n", __func__, ret);
+ goto out;
+ }
+
+ ret = check_completion(dev, idxd_desc->iax_completion, false, false);
+ if (ret)
+ dev_dbg(dev, "%s: mode->name %s check_completion failed ret=%d\n",
+ __func__, mode->name, ret);
+ else
+ dev_dbg(dev, "%s: mode->name %s succeeded\n", __func__,
+ mode->name);
+out:
+ dma_unmap_single(dev, src_addr, 1088, DMA_TO_DEVICE);
+
+ return ret;
+}
+
static int init_device_compression_mode(struct iaa_device *iaa_device,
struct iaa_compression_mode *mode,
int idx, struct idxd_wq *wq)
@@ -300,6 +435,14 @@ static int init_device_compression_mode(struct iaa_device *iaa_device,
memcpy(device_mode->aecs_comp_table->ll_sym, mode->ll_table, mode->ll_table_size);
memcpy(device_mode->aecs_comp_table->d_sym, mode->d_table, mode->d_table_size);

+ if (mode->header_table) {
+ ret = decompress_header(device_mode, mode, wq);
+ if (ret) {
+ pr_debug("iaa header decompression failed: ret=%d\n", ret);
+ goto free;
+ }
+ }
+
if (mode->init) {
ret = mode->init(device_mode);
if (ret)
@@ -372,18 +515,6 @@ static struct iaa_device *iaa_device_alloc(void)
return iaa_device;
}

-static void iaa_device_free(struct iaa_device *iaa_device)
-{
- struct iaa_wq *iaa_wq, *next;
-
- list_for_each_entry_safe(iaa_wq, next, &iaa_device->wqs, list) {
- list_del(&iaa_wq->list);
- kfree(iaa_wq);
- }
-
- kfree(iaa_device);
-}
-
static bool iaa_has_wq(struct iaa_device *iaa_device, struct idxd_wq *wq)
{
struct iaa_wq *iaa_wq;
@@ -426,12 +557,8 @@ static int init_iaa_device(struct iaa_device *iaa_device, struct iaa_wq *iaa_wq)

static void del_iaa_device(struct iaa_device *iaa_device)
{
- remove_device_compression_modes(iaa_device);
-
list_del(&iaa_device->list);

- iaa_device_free(iaa_device);
-
nr_iaa--;
}

@@ -497,6 +624,82 @@ static void clear_wq_table(void)
pr_debug("cleared wq table\n");
}

+static void free_iaa_device(struct iaa_device *iaa_device)
+{
+ if (!iaa_device)
+ return;
+
+ remove_device_compression_modes(iaa_device);
+ kfree(iaa_device);
+}
+
+static void __free_iaa_wq(struct iaa_wq *iaa_wq)
+{
+ struct iaa_device *iaa_device;
+
+ if (!iaa_wq)
+ return;
+
+ iaa_device = iaa_wq->iaa_device;
+ if (iaa_device->n_wq == 0)
+ free_iaa_device(iaa_wq->iaa_device);
+}
+
+static void free_iaa_wq(struct iaa_wq *iaa_wq)
+{
+ struct idxd_wq *wq;
+
+ __free_iaa_wq(iaa_wq);
+
+ wq = iaa_wq->wq;
+
+ kfree(iaa_wq);
+ idxd_wq_set_private(wq, NULL);
+}
+
+static int iaa_wq_get(struct idxd_wq *wq)
+{
+ struct idxd_device *idxd = wq->idxd;
+ struct iaa_wq *iaa_wq;
+ int ret = 0;
+
+ spin_lock(&idxd->dev_lock);
+ iaa_wq = idxd_wq_get_private(wq);
+ if (iaa_wq && !iaa_wq->remove)
+ iaa_wq->ref++;
+ else
+ ret = -ENODEV;
+ spin_unlock(&idxd->dev_lock);
+
+ return ret;
+}
+
+static int iaa_wq_put(struct idxd_wq *wq)
+{
+ struct idxd_device *idxd = wq->idxd;
+ struct iaa_wq *iaa_wq;
+ bool free = false;
+ int ret = 0;
+
+ spin_lock(&idxd->dev_lock);
+ iaa_wq = idxd_wq_get_private(wq);
+ if (iaa_wq) {
+ iaa_wq->ref--;
+ if (iaa_wq->ref == 0 && iaa_wq->remove) {
+ __free_iaa_wq(iaa_wq);
+ idxd_wq_set_private(wq, NULL);
+ free = true;
+ }
+ } else {
+ ret = -ENODEV;
+ }
+ spin_unlock(&idxd->dev_lock);
+ if (free)
+ kfree(iaa_wq);
+
+ return ret;
+}
+
static void free_wq_table(void)
{
int cpu;
@@ -580,6 +783,7 @@ static int save_iaa_wq(struct idxd_wq *wq)
ret = add_iaa_wq(new_device, wq, &new_wq);
if (ret) {
del_iaa_device(new_device);
+ free_iaa_device(new_device);
goto out;
}

@@ -587,6 +791,7 @@ static int save_iaa_wq(struct idxd_wq *wq)
if (ret) {
del_iaa_wq(new_device, new_wq->wq);
del_iaa_device(new_device);
+ free_iaa_wq(new_wq);
goto out;
}
}
@@ -724,6 +929,624 @@ static void rebalance_wq_table(void)
}
}

+static inline int check_completion(struct device *dev,
+ struct iax_completion_record *comp,
+ bool compress,
+ bool only_once)
+{
+ char *op_str = compress ? "compress" : "decompress";
+ int ret = 0;
+
+ while (!comp->status) {
+ if (only_once)
+ return -EAGAIN;
+ cpu_relax();
+ }
+
+ if (comp->status != IAX_COMP_SUCCESS) {
+ if (comp->status == IAA_ERROR_WATCHDOG_EXPIRED) {
+ ret = -ETIMEDOUT;
+ dev_dbg(dev, "%s timed out, size=0x%x\n",
+ op_str, comp->output_size);
+ goto out;
+ }
+
+ if (comp->status == IAA_ANALYTICS_ERROR &&
+ comp->error_code == IAA_ERROR_COMP_BUF_OVERFLOW && compress) {
+ ret = -E2BIG;
+ dev_dbg(dev, "compressed > uncompressed size,"
+ " not compressing, size=0x%x\n",
+ comp->output_size);
+ goto out;
+ }
+
+ if (comp->status == IAA_ERROR_DECOMP_BUF_OVERFLOW) {
+ ret = -EOVERFLOW;
+ goto out;
+ }
+
+ ret = -EINVAL;
+ dev_dbg(dev, "iaa %s status=0x%x, error=0x%x, size=0x%x\n",
+ op_str, comp->status, comp->error_code, comp->output_size);
+ print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET, 8, 1, comp, 64, 0);
+
+ goto out;
+ }
+out:
+ return ret;
+}
+
+static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
+ struct idxd_wq *wq,
+ dma_addr_t src_addr, unsigned int slen,
+ dma_addr_t dst_addr, unsigned int *dlen,
+ u32 *compression_crc,
+ bool disable_async)
+{
+ struct iaa_device_compression_mode *active_compression_mode;
+ struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct iaa_device *iaa_device;
+ struct idxd_desc *idxd_desc;
+ struct iax_hw_desc *desc;
+ struct idxd_device *idxd;
+ struct iaa_wq *iaa_wq;
+ struct pci_dev *pdev;
+ struct device *dev;
+ int ret = 0;
+
+ iaa_wq = idxd_wq_get_private(wq);
+ iaa_device = iaa_wq->iaa_device;
+ idxd = iaa_device->idxd;
+ pdev = idxd->pdev;
+ dev = &pdev->dev;
+
+ active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
+
+ idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
+ if (IS_ERR(idxd_desc)) {
+ dev_dbg(dev, "idxd descriptor allocation failed\n");
+ dev_dbg(dev, "iaa compress failed: ret=%ld\n", PTR_ERR(idxd_desc));
+ return PTR_ERR(idxd_desc);
+ }
+ desc = idxd_desc->iax_hw;
+
+ desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR |
+ IDXD_OP_FLAG_RD_SRC2_AECS | IDXD_OP_FLAG_CC;
+ desc->opcode = IAX_OPCODE_COMPRESS;
+ desc->compr_flags = IAA_COMP_FLAGS;
+ desc->priv = 1;
+
+ desc->src1_addr = (u64)src_addr;
+ desc->src1_size = slen;
+ desc->dst_addr = (u64)dst_addr;
+ desc->max_dst_size = *dlen;
+ desc->src2_addr = active_compression_mode->aecs_comp_table_dma_addr;
+ desc->src2_size = sizeof(struct aecs_comp_table_record);
+ desc->completion_addr = idxd_desc->compl_dma;
+
+ dev_dbg(dev, "%s: compression mode %s,"
+ " desc->src1_addr %llx, desc->src1_size %d,"
+ " desc->dst_addr %llx, desc->max_dst_size %d,"
+ " desc->src2_addr %llx, desc->src2_size %d\n", __func__,
+ active_compression_mode->name,
+ desc->src1_addr, desc->src1_size, desc->dst_addr,
+ desc->max_dst_size, desc->src2_addr, desc->src2_size);
+
+ ret = idxd_submit_desc(wq, idxd_desc);
+ if (ret) {
+ dev_dbg(dev, "submit_desc failed ret=%d\n", ret);
+ goto err;
+ }
+
+ ret = check_completion(dev, idxd_desc->iax_completion, true, false);
+ if (ret) {
+ dev_dbg(dev, "check_completion failed ret=%d\n", ret);
+ goto err;
+ }
+
+ *dlen = idxd_desc->iax_completion->output_size;
+
+ *compression_crc = idxd_desc->iax_completion->crc;
+
+ idxd_free_desc(wq, idxd_desc);
+out:
+ return ret;
+err:
+ idxd_free_desc(wq, idxd_desc);
+ dev_dbg(dev, "iaa compress failed: ret=%d\n", ret);
+
+ goto out;
+}
+
+static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
+ struct idxd_wq *wq,
+ dma_addr_t src_addr, unsigned int slen,
+ dma_addr_t dst_addr, unsigned int *dlen,
+ u32 compression_crc)
+{
+ struct iaa_device_compression_mode *active_compression_mode;
+ struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct iaa_device *iaa_device;
+ struct idxd_desc *idxd_desc;
+ struct iax_hw_desc *desc;
+ struct idxd_device *idxd;
+ struct iaa_wq *iaa_wq;
+ struct pci_dev *pdev;
+ struct device *dev;
+ int ret = 0;
+
+ iaa_wq = idxd_wq_get_private(wq);
+ iaa_device = iaa_wq->iaa_device;
+ idxd = iaa_device->idxd;
+ pdev = idxd->pdev;
+ dev = &pdev->dev;
+
+ active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
+
+ idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
+ if (IS_ERR(idxd_desc)) {
+ dev_dbg(dev, "idxd descriptor allocation failed\n");
+ dev_dbg(dev, "iaa compress failed: ret=%ld\n",
+ PTR_ERR(idxd_desc));
+ return PTR_ERR(idxd_desc);
+ }
+ desc = idxd_desc->iax_hw;
+
+ /* Verify (optional) - decompress and check crc, suppress dest write */
+
+ desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC;
+ desc->opcode = IAX_OPCODE_DECOMPRESS;
+ desc->decompr_flags = IAA_DECOMP_FLAGS | IAA_DECOMP_SUPPRESS_OUTPUT;
+ desc->priv = 1;
+
+ desc->src1_addr = (u64)dst_addr;
+ desc->src1_size = *dlen;
+ desc->dst_addr = (u64)src_addr;
+ desc->max_dst_size = slen;
+ desc->completion_addr = idxd_desc->compl_dma;
+
+ dev_dbg(dev, "(verify) compression mode %s,"
+ " desc->src1_addr %llx, desc->src1_size %d,"
+ " desc->dst_addr %llx, desc->max_dst_size %d,"
+ " desc->src2_addr %llx, desc->src2_size %d\n",
+ active_compression_mode->name,
+ desc->src1_addr, desc->src1_size, desc->dst_addr,
+ desc->max_dst_size, desc->src2_addr, desc->src2_size);
+
+ ret = idxd_submit_desc(wq, idxd_desc);
+ if (ret) {
+ dev_dbg(dev, "submit_desc (verify) failed ret=%d\n", ret);
+ goto err;
+ }
+
+ ret = check_completion(dev, idxd_desc->iax_completion, false, false);
+ if (ret) {
+ dev_dbg(dev, "(verify) check_completion failed ret=%d\n", ret);
+ goto err;
+ }
+
+ if (compression_crc != idxd_desc->iax_completion->crc) {
+ ret = -EINVAL;
+ dev_dbg(dev, "(verify) iaa comp/decomp crc mismatch:"
+ " comp=0x%x, decomp=0x%x\n", compression_crc,
+ idxd_desc->iax_completion->crc);
+ print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET,
+ 8, 1, idxd_desc->iax_completion, 64, 0);
+ goto err;
+ }
+
+ idxd_free_desc(wq, idxd_desc);
+out:
+ return ret;
+err:
+ idxd_free_desc(wq, idxd_desc);
+ dev_dbg(dev, "iaa compress failed: ret=%d\n", ret);
+
+ goto out;
+}
+
+static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
+ struct idxd_wq *wq,
+ dma_addr_t src_addr, unsigned int slen,
+ dma_addr_t dst_addr, unsigned int *dlen,
+ bool disable_async)
+{
+ struct iaa_device_compression_mode *active_compression_mode;
+ struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct iaa_device *iaa_device;
+ struct idxd_desc *idxd_desc;
+ struct iax_hw_desc *desc;
+ struct idxd_device *idxd;
+ struct iaa_wq *iaa_wq;
+ struct pci_dev *pdev;
+ struct device *dev;
+ int ret = 0;
+
+ iaa_wq = idxd_wq_get_private(wq);
+ iaa_device = iaa_wq->iaa_device;
+ idxd = iaa_device->idxd;
+ pdev = idxd->pdev;
+ dev = &pdev->dev;
+
+ active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
+
+ idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
+ if (IS_ERR(idxd_desc)) {
+ dev_dbg(dev, "idxd descriptor allocation failed\n");
+ dev_dbg(dev, "iaa decompress failed: ret=%ld\n",
+ PTR_ERR(idxd_desc));
+ return PTR_ERR(idxd_desc);
+ }
+ desc = idxd_desc->iax_hw;
+
+ desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC;
+ desc->opcode = IAX_OPCODE_DECOMPRESS;
+ desc->max_dst_size = PAGE_SIZE;
+ desc->decompr_flags = IAA_DECOMP_FLAGS;
+ desc->priv = 1;
+
+ desc->src1_addr = (u64)src_addr;
+ desc->dst_addr = (u64)dst_addr;
+ desc->max_dst_size = *dlen;
+ desc->src1_size = slen;
+ desc->completion_addr = idxd_desc->compl_dma;
+
+ dev_dbg(dev, "%s: decompression mode %s,"
+ " desc->src1_addr %llx, desc->src1_size %d,"
+ " desc->dst_addr %llx, desc->max_dst_size %d,"
+ " desc->src2_addr %llx, desc->src2_size %d\n", __func__,
+ active_compression_mode->name,
+ desc->src1_addr, desc->src1_size, desc->dst_addr,
+ desc->max_dst_size, desc->src2_addr, desc->src2_size);
+
+ ret = idxd_submit_desc(wq, idxd_desc);
+ if (ret) {
+ dev_dbg(dev, "submit_desc failed ret=%d\n", ret);
+ goto err;
+ }
+
+ ret = check_completion(dev, idxd_desc->iax_completion, false, false);
+ if (ret) {
+ dev_dbg(dev, "check_completion failed ret=%d\n", ret);
+ goto err;
+ }
+
+ *dlen = idxd_desc->iax_completion->output_size;
+
+ idxd_free_desc(wq, idxd_desc);
+out:
+ return ret;
+err:
+ idxd_free_desc(wq, idxd_desc);
+ dev_dbg(dev, "iaa decompress failed: ret=%d\n", ret);
+
+ goto out;
+}
+
+static int iaa_comp_acompress(struct acomp_req *req)
+{
+ struct iaa_compression_ctx *compression_ctx;
+ struct crypto_tfm *tfm = req->base.tfm;
+ dma_addr_t src_addr, dst_addr;
+ int nr_sgs, cpu, ret = 0;
+ struct iaa_wq *iaa_wq;
+ u32 compression_crc;
+ struct idxd_wq *wq;
+ struct device *dev;
+
+ compression_ctx = crypto_tfm_ctx(tfm);
+
+ if (!iaa_crypto_enabled) {
+ pr_debug("iaa_crypto disabled, not compressing\n");
+ return -ENODEV;
+ }
+
+ if (!req->src || !req->slen) {
+ pr_debug("invalid src, not compressing\n");
+ return -EINVAL;
+ }
+
+ cpu = get_cpu();
+ wq = wq_table_next_wq(cpu);
+ put_cpu();
+ if (!wq) {
+ pr_debug("no wq configured for cpu=%d\n", cpu);
+ return -ENODEV;
+ }
+
+ ret = iaa_wq_get(wq);
+ if (ret) {
+ pr_debug("no wq available for cpu=%d\n", cpu);
+ return -ENODEV;
+ }
+
+ iaa_wq = idxd_wq_get_private(wq);
+
+ if (!req->dst) {
+ gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
+ /* incompressible data will always be < 2 * slen */
+ req->dlen = 2 * req->slen;
+ req->dst = sgl_alloc(req->dlen, flags, NULL);
+ if (!req->dst) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ }
+
+ dev = &wq->idxd->pdev->dev;
+
+ nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+ if (nr_sgs <= 0 || nr_sgs > 1) {
+ dev_dbg(dev, "couldn't map src sg for iaa device %d,"
+ " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
+ iaa_wq->wq->id, ret);
+ ret = -EIO;
+ goto out;
+ }
+ src_addr = sg_dma_address(req->src);
+ dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
+ " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
+ req->src, req->slen, sg_dma_len(req->src));
+
+ nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+ if (nr_sgs <= 0 || nr_sgs > 1) {
+ dev_dbg(dev, "couldn't map dst sg for iaa device %d,"
+ " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
+ iaa_wq->wq->id, ret);
+ ret = -EIO;
+ goto err_map_dst;
+ }
+ dst_addr = sg_dma_address(req->dst);
+ dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
+ " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
+ req->dst, req->dlen, sg_dma_len(req->dst));
+
+ ret = iaa_compress(tfm, req, wq, src_addr, req->slen, dst_addr,
+ &req->dlen, &compression_crc, false);
+ if (ret == -EINPROGRESS)
+ return ret;
+
+ if (!ret && compression_ctx->verify_compress) {
+ dma_sync_sg_for_device(dev, req->dst, 1, DMA_FROM_DEVICE);
+ dma_sync_sg_for_device(dev, req->src, 1, DMA_TO_DEVICE);
+ ret = iaa_compress_verify(tfm, req, wq, src_addr, req->slen,
+ dst_addr, &req->dlen, compression_crc);
+ }
+
+ if (ret)
+ dev_dbg(dev, "asynchronous compress failed ret=%d\n", ret);
+
+ dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+err_map_dst:
+ dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+out:
+ iaa_wq_put(wq);
+
+ return ret;
+}
+
+static int iaa_comp_adecompress_alloc_dest(struct acomp_req *req)
+{
+ gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
+ GFP_KERNEL : GFP_ATOMIC;
+ struct crypto_tfm *tfm = req->base.tfm;
+ dma_addr_t src_addr, dst_addr;
+ int nr_sgs, cpu, ret = 0;
+ struct iaa_wq *iaa_wq;
+ struct device *dev;
+ struct idxd_wq *wq;
+
+ cpu = get_cpu();
+ wq = wq_table_next_wq(cpu);
+ put_cpu();
+ if (!wq) {
+ pr_debug("no wq configured for cpu=%d\n", cpu);
+ return -ENODEV;
+ }
+
+ ret = iaa_wq_get(wq);
+ if (ret) {
+ pr_debug("no wq available for cpu=%d\n", cpu);
+ return -ENODEV;
+ }
+
+ iaa_wq = idxd_wq_get_private(wq);
+
+ dev = &wq->idxd->pdev->dev;
+
+ nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+ if (nr_sgs <= 0 || nr_sgs > 1) {
+ dev_dbg(dev, "couldn't map src sg for iaa device %d,"
+ " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
+ iaa_wq->wq->id, ret);
+ ret = -EIO;
+ goto out;
+ }
+ src_addr = sg_dma_address(req->src);
+ dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
+ " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
+ req->src, req->slen, sg_dma_len(req->src));
+
+ req->dlen = 4 * req->slen; /* start with ~avg comp ratio */
+alloc_dest:
+ req->dst = sgl_alloc(req->dlen, flags, NULL);
+ if (!req->dst) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+ if (nr_sgs <= 0 || nr_sgs > 1) {
+ dev_dbg(dev, "couldn't map dst sg for iaa device %d,"
+ " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
+ iaa_wq->wq->id, ret);
+ ret = -EIO;
+ goto err_map_dst;
+ }
+
+ dst_addr = sg_dma_address(req->dst);
+ dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
+ " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
+ req->dst, req->dlen, sg_dma_len(req->dst));
+ ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
+ dst_addr, &req->dlen, true);
+ if (ret == -EOVERFLOW) {
+ dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+ sgl_free(req->dst);
+ req->dlen *= 2;
+ if (req->dlen > CRYPTO_ACOMP_DST_MAX)
+ goto err_map_dst;
+ goto alloc_dest;
+ }
+
+ if (ret != 0)
+ dev_dbg(dev, "asynchronous decompress failed ret=%d\n", ret);
+
+ dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+err_map_dst:
+ dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+out:
+ iaa_wq_put(wq);
+
+ return ret;
+}
+
+static int iaa_comp_adecompress(struct acomp_req *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ dma_addr_t src_addr, dst_addr;
+ int nr_sgs, cpu, ret = 0;
+ struct iaa_wq *iaa_wq;
+ struct device *dev;
+ struct idxd_wq *wq;
+
+ if (!iaa_crypto_enabled) {
+ pr_debug("iaa_crypto disabled, not decompressing\n");
+ return -ENODEV;
+ }
+
+ if (!req->src || !req->slen) {
+ pr_debug("invalid src, not decompressing\n");
+ return -EINVAL;
+ }
+
+ if (!req->dst)
+ return iaa_comp_adecompress_alloc_dest(req);
+
+ cpu = get_cpu();
+ wq = wq_table_next_wq(cpu);
+ put_cpu();
+ if (!wq) {
+ pr_debug("no wq configured for cpu=%d\n", cpu);
+ return -ENODEV;
+ }
+
+ ret = iaa_wq_get(wq);
+ if (ret) {
+ pr_debug("no wq available for cpu=%d\n", cpu);
+ return -ENODEV;
+ }
+
+ iaa_wq = idxd_wq_get_private(wq);
+
+ dev = &wq->idxd->pdev->dev;
+
+ nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+ if (nr_sgs <= 0 || nr_sgs > 1) {
+ dev_dbg(dev, "couldn't map src sg for iaa device %d,"
+ " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
+ iaa_wq->wq->id, ret);
+ ret = -EIO;
+ goto out;
+ }
+ src_addr = sg_dma_address(req->src);
+ dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
+ " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
+ req->src, req->slen, sg_dma_len(req->src));
+
+ nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+ if (nr_sgs <= 0 || nr_sgs > 1) {
+ dev_dbg(dev, "couldn't map dst sg for iaa device %d,"
+ " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
+ iaa_wq->wq->id, ret);
+ ret = -EIO;
+ goto err_map_dst;
+ }
+ dst_addr = sg_dma_address(req->dst);
+ dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
+ " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
+ req->dst, req->dlen, sg_dma_len(req->dst));
+
+ ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
+ dst_addr, &req->dlen, false);
+ if (ret == -EINPROGRESS)
+ return ret;
+
+ if (ret != 0)
+ dev_dbg(dev, "asynchronous decompress failed ret=%d\n", ret);
+
+ dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+err_map_dst:
+ dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+out:
+ iaa_wq_put(wq);
+
+ return ret;
+}
+
+static void compression_ctx_init(struct iaa_compression_ctx *ctx)
+{
+ ctx->verify_compress = iaa_verify_compress;
+}
+
+static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
+{
+ struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
+ struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ compression_ctx_init(ctx);
+
+ ctx->mode = IAA_MODE_FIXED;
+
+ return 0;
+}
+
+static struct acomp_alg iaa_acomp_fixed_deflate = {
+ .init = iaa_comp_init_fixed,
+ .compress = iaa_comp_acompress,
+ .decompress = iaa_comp_adecompress,
+ .dst_free = sgl_free,
+ .base = {
+ .cra_name = "deflate-iaa",
+ .cra_driver_name = "deflate_iaa",
+ .cra_ctxsize = sizeof(struct iaa_compression_ctx),
+ .cra_module = THIS_MODULE,
+ }
+};
+
+static int iaa_register_compression_device(void)
+{
+ int ret;
+
+ ret = crypto_register_acomp(&iaa_acomp_fixed_deflate);
+ if (ret) {
+ pr_err("deflate algorithm acomp fixed registration failed (%d)\n", ret);
+ goto out;
+ }
+
+ iaa_crypto_registered = true;
+out:
+ return ret;
+}
+
+static int iaa_unregister_compression_device(void)
+{
+ if (iaa_crypto_registered)
+ crypto_unregister_acomp(&iaa_acomp_fixed_deflate);
+
+ return 0;
+}
+
static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
{
struct idxd_wq *wq = idxd_dev_to_wq(idxd_dev);
@@ -741,6 +1564,11 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)

mutex_lock(&wq->wq_lock);

+ if (idxd_wq_get_private(wq)) {
+ mutex_unlock(&wq->wq_lock);
+ return -EBUSY;
+ }
+
if (!idxd_wq_driver_name_match(wq, dev)) {
dev_dbg(dev, "wq %d.%d driver_name match failed: wq driver_name %s, dev driver name %s\n",
idxd->id, wq->id, wq->driver_name, dev->driver->name);
@@ -774,12 +1602,28 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)

rebalance_wq_table();

+ if (first_wq) {
+ iaa_crypto_enabled = true;
+ ret = iaa_register_compression_device();
+ if (ret != 0) {
+ iaa_crypto_enabled = false;
+ dev_dbg(dev, "IAA compression device registration failed\n");
+ goto err_register;
+ }
+ try_module_get(THIS_MODULE);
+
+ pr_info("iaa_crypto now ENABLED\n");
+ }
+
mutex_unlock(&iaa_devices_lock);
out:
mutex_unlock(&wq->wq_lock);

return ret;

+err_register:
+ remove_iaa_wq(wq);
+ free_iaa_wq(idxd_wq_get_private(wq));
err_save:
if (first_wq)
free_wq_table();
@@ -795,6 +1639,9 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
static void iaa_crypto_remove(struct idxd_dev *idxd_dev)
{
struct idxd_wq *wq = idxd_dev_to_wq(idxd_dev);
+ struct idxd_device *idxd = wq->idxd;
+ struct iaa_wq *iaa_wq;
+ bool free = false;

idxd_wq_quiesce(wq);

@@ -802,11 +1649,37 @@ static void iaa_crypto_remove(struct idxd_dev *idxd_dev)
mutex_lock(&iaa_devices_lock);

remove_iaa_wq(wq);
+
+ spin_lock(&idxd->dev_lock);
+ iaa_wq = idxd_wq_get_private(wq);
+ if (!iaa_wq) {
+ pr_err("%s: no iaa_wq available to remove\n", __func__);
+ return;
+ }
+
+ if (iaa_wq->ref) {
+ iaa_wq->remove = true;
+ } else {
+ wq = iaa_wq->wq;
+ __free_iaa_wq(iaa_wq);
+ idxd_wq_set_private(wq, NULL);
+ free = true;
+ }
+ spin_unlock(&idxd->dev_lock);
+
+ if (free)
+ kfree(iaa_wq);
+
drv_disable_wq(wq);
rebalance_wq_table();

- if (nr_iaa == 0)
+ if (nr_iaa == 0) {
+ iaa_crypto_enabled = false;
free_wq_table();
+ module_put(THIS_MODULE);
+
+ pr_info("iaa_crypto now DISABLED\n");
+ }

mutex_unlock(&iaa_devices_lock);
mutex_unlock(&wq->wq_lock);
@@ -844,10 +1717,19 @@ static int __init iaa_crypto_init_module(void)
goto err_driver_reg;
}

+ ret = driver_create_file(&iaa_crypto_driver.drv,
+ &driver_attr_verify_compress);
+ if (ret) {
+ pr_debug("IAA verify_compress attr creation failed\n");
+ goto err_verify_attr_create;
+ }
+
pr_debug("initialized\n");
out:
return ret;

+err_verify_attr_create:
+ idxd_driver_unregister(&iaa_crypto_driver);
err_driver_reg:
iaa_aecs_cleanup_fixed();

@@ -856,6 +1738,11 @@ static int __init iaa_crypto_init_module(void)

static void __exit iaa_crypto_cleanup_module(void)
{
+ if (iaa_unregister_compression_device())
+ pr_debug("IAA compression device unregister failed\n");
+
+ driver_remove_file(&iaa_crypto_driver.drv,
+ &driver_attr_verify_compress);
idxd_driver_unregister(&iaa_crypto_driver);
iaa_aecs_cleanup_fixed();

--
2.34.1


2023-07-12 23:18:07

by Fenghua Yu

[permalink] [raw]
Subject: RE: [PATCH v7 06/14] dmaengine: idxd: Add wq private data accessors

> From: Tom Zanussi <[email protected]>
> Add the accessors idxd_wq_set_private() and idxd_wq_get_private() allowing
> users to set and retrieve a private void * associated with an idxd_wq.
>
> The private data is stored in the idxd_dev.conf_dev associated with each idxd_wq.
>
> Signed-off-by: Tom Zanussi <[email protected]>

Reviewed-by: Fenghua Yu <[email protected]>

Thanks.

-Fenghua

2023-07-12 23:28:43

by Fenghua Yu

[permalink] [raw]
Subject: RE: [PATCH v7 02/14] dmaengine: idxd: add external module driver support for dsa_bus_type

> From: Tom Zanussi <[email protected]>
> From: Dave Jiang <[email protected]>
>
> Add support to allow an external driver to be registered to the dsa_bus_type and
> also auto-loaded.
>
> Signed-off-by: Dave Jiang <[email protected]>
> Signed-off-by: Tom Zanussi <[email protected]>

Reviewed-by: Fenghua Yu <[email protected]>

Thanks.

-Fenghua

2023-07-17 02:23:37

by Rex Zhang

[permalink] [raw]
Subject: Re: [PATCH v7 12/14] crypto: iaa - Add support for deflate-iaa compression algorithm

Hi, Tom,

On 2023-07-10 at 14:06:52 -0500, Tom Zanussi wrote:
> This patch registers the deflate-iaa deflate compression algorithm and
> hooks it up to the IAA hardware using the 'fixed' compression mode
> introduced in the previous patch.
>
> Because the IAA hardware has a 4k history-window limitation, only
> buffers <= 4k, or that have been compressed using a <= 4k history
> window, are technically compliant with the deflate spec, which allows
> for a window of up to 32k. Because of this limitation, the IAA fixed
> mode deflate algorithm is given its own algorithm name, 'deflate-iaa'.
>
> With this change, the deflate-iaa crypto algorithm is registered and
> operational, and compression and decompression operations are fully
> enabled following the successful binding of the first IAA workqueue
> to the iaa_crypto sub-driver.
>
> when there are no IAA workqueues bound to the driver, the IAA crypto
> algorithm can be unregistered by removing the module.
>
> A new iaa_crypto 'verify_compress' driver attribute is also added,
> allowing the user to toggle compression verification. If set, each
> compress will be internally decompressed and the contents verified,
> returning error codes if unsuccessful. This can be toggled with 0/1:
>
> echo 0 > /sys/bus/dsa/drivers/crypto/verify_compress
>
> The default setting is '1' - verify all compresses.
>
> The verify_compress value setting at the time the algorithm is
> registered is captured in the algorithm's crypto_ctx and used for all
> compresses when using the algorithm.
>
> [ Based on work originally by George Powley, Jing Lin and Kyung Min
> Park ]
>
> Signed-off-by: Tom Zanussi <[email protected]>
> ---
> crypto/testmgr.c | 10 +
> drivers/crypto/intel/iaa/iaa_crypto.h | 36 +
> drivers/crypto/intel/iaa/iaa_crypto_main.c | 921 ++++++++++++++++++++-
> 3 files changed, 950 insertions(+), 17 deletions(-)
>
> diff --git a/crypto/testmgr.c b/crypto/testmgr.c
> index 216878c8bc3d..b6d924e0ff59 100644
> --- a/crypto/testmgr.c
> +++ b/crypto/testmgr.c
> @@ -4819,6 +4819,16 @@ static const struct alg_test_desc alg_test_descs[] = {
> .decomp = __VECS(deflate_decomp_tv_template)
> }
> }
> + }, {
> + .alg = "deflate-iaa",
> + .test = alg_test_comp,
> + .fips_allowed = 1,
> + .suite = {
> + .comp = {
> + .comp = __VECS(deflate_comp_tv_template),
> + .decomp = __VECS(deflate_decomp_tv_template)
> + }
> + }
> }, {
> .alg = "dh",
> .test = alg_test_kpp,
> diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
> index 33e68f9d3d02..4c6b0f5a6b50 100644
> --- a/drivers/crypto/intel/iaa/iaa_crypto.h
> +++ b/drivers/crypto/intel/iaa/iaa_crypto.h
> @@ -10,15 +10,42 @@
>
> #define IDXD_SUBDRIVER_NAME "crypto"
>
> +#define IAA_DECOMP_ENABLE BIT(0)
> +#define IAA_DECOMP_FLUSH_OUTPUT BIT(1)
> +#define IAA_DECOMP_CHECK_FOR_EOB BIT(2)
> +#define IAA_DECOMP_STOP_ON_EOB BIT(3)
> +#define IAA_DECOMP_SUPPRESS_OUTPUT BIT(9)
> +
> +#define IAA_COMP_FLUSH_OUTPUT BIT(1)
> +#define IAA_COMP_APPEND_EOB BIT(2)
> +
> +#define IAA_COMPLETION_TIMEOUT 1000000
> +
> +#define IAA_ANALYTICS_ERROR 0x0a
> +#define IAA_ERROR_DECOMP_BUF_OVERFLOW 0x0b
> +#define IAA_ERROR_COMP_BUF_OVERFLOW 0x19
> +#define IAA_ERROR_WATCHDOG_EXPIRED 0x24
> +
> #define IAA_COMP_MODES_MAX 2
>
> #define FIXED_HDR 0x2
> #define FIXED_HDR_SIZE 3
>
> +#define IAA_COMP_FLAGS (IAA_COMP_FLUSH_OUTPUT | \
> + IAA_COMP_APPEND_EOB)
> +
> +#define IAA_DECOMP_FLAGS (IAA_DECOMP_ENABLE | \
> + IAA_DECOMP_FLUSH_OUTPUT | \
> + IAA_DECOMP_CHECK_FOR_EOB | \
> + IAA_DECOMP_STOP_ON_EOB)
> +
> /* Representation of IAA workqueue */
> struct iaa_wq {
> struct list_head list;
> +
> struct idxd_wq *wq;
> + int ref;
> + bool remove;
>
> struct iaa_device *iaa_device;
> };
> @@ -119,4 +146,13 @@ int add_iaa_compression_mode(const char *name,
>
> void remove_iaa_compression_mode(const char *name);
>
> +enum iaa_mode {
> + IAA_MODE_FIXED,
> +};
> +
> +struct iaa_compression_ctx {
> + enum iaa_mode mode;
> + bool verify_compress;
> +};
> +
> #endif
> diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
> index 0c59332456f0..9b4acc343582 100644
> --- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
> +++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
> @@ -10,6 +10,7 @@
> #include <uapi/linux/idxd.h>
> #include <linux/highmem.h>
> #include <linux/sched/smt.h>
> +#include <crypto/internal/acompress.h>
>
> #include "idxd.h"
> #include "iaa_crypto.h"
> @@ -32,6 +33,20 @@ static unsigned int cpus_per_iaa;
> /* Per-cpu lookup table for balanced wqs */
> static struct wq_table_entry __percpu *wq_table;
>
> +static struct idxd_wq *wq_table_next_wq(int cpu)
> +{
> + struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
> +
> + if (++entry->cur_wq >= entry->n_wqs)
> + entry->cur_wq = 0;
> +
> + pr_debug("%s: returning wq at idx %d (iaa wq %d.%d) from cpu %d\n", __func__,
> + entry->cur_wq, entry->wqs[entry->cur_wq]->idxd->id,
> + entry->wqs[entry->cur_wq]->id, cpu);
> +
> + return entry->wqs[entry->cur_wq];
> +}
> +
> static void wq_table_add(int cpu, struct idxd_wq *wq)
> {
> struct wq_table_entry *entry = per_cpu_ptr(wq_table, cpu);
> @@ -66,6 +81,40 @@ static void wq_table_clear_entry(int cpu)
> static LIST_HEAD(iaa_devices);
> static DEFINE_MUTEX(iaa_devices_lock);
>
> +/* If enabled, IAA hw crypto algos are registered, unavailable otherwise */
> +static bool iaa_crypto_enabled;
> +static bool iaa_crypto_registered;
> +
> +/* Verify results of IAA compress or not */
> +static bool iaa_verify_compress = true;
> +
> +static ssize_t verify_compress_show(struct device_driver *driver, char *buf)
> +{
> + return sprintf(buf, "%d\n", iaa_verify_compress);
> +}
> +
> +static ssize_t verify_compress_store(struct device_driver *driver,
> + const char *buf, size_t count)
> +{
> + int ret = -EBUSY;
> +
> + mutex_lock(&iaa_devices_lock);
> +
> + if (iaa_crypto_enabled)
> + goto out;
> +
> + ret = kstrtobool(buf, &iaa_verify_compress);
> + if (ret)
> + goto out;
> +
> + ret = count;
> +out:
> + mutex_unlock(&iaa_devices_lock);
> +
> + return ret;
> +}
> +static DRIVER_ATTR_RW(verify_compress);
> +
> static struct iaa_compression_mode *iaa_compression_modes[IAA_COMP_MODES_MAX];
>
> static int find_empty_iaa_compression_mode(void)
> @@ -250,6 +299,12 @@ int add_iaa_compression_mode(const char *name,
> }
> EXPORT_SYMBOL_GPL(add_iaa_compression_mode);
>
> +static struct iaa_device_compression_mode *
> +get_iaa_device_compression_mode(struct iaa_device *iaa_device, int idx)
> +{
> + return iaa_device->compression_modes[idx];
> +}
> +
> static void free_device_compression_mode(struct iaa_device *iaa_device,
> struct iaa_device_compression_mode *device_mode)
> {
> @@ -268,6 +323,86 @@ static void free_device_compression_mode(struct iaa_device *iaa_device,
> kfree(device_mode);
> }
>
> +#define IDXD_OP_FLAG_AECS_RW_TGLS 0x400000
> +#define IAX_AECS_DEFAULT_FLAG (IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC)
> +#define IAX_AECS_COMPRESS_FLAG (IAX_AECS_DEFAULT_FLAG | IDXD_OP_FLAG_RD_SRC2_AECS)
> +#define IAX_AECS_DECOMPRESS_FLAG (IAX_AECS_DEFAULT_FLAG | IDXD_OP_FLAG_RD_SRC2_AECS)
> +#define IAX_AECS_GEN_FLAG (IAX_AECS_DEFAULT_FLAG | \
> + IDXD_OP_FLAG_WR_SRC2_AECS_COMP | \
> + IDXD_OP_FLAG_AECS_RW_TGLS)
> +
> +static int check_completion(struct device *dev,
> + struct iax_completion_record *comp,
> + bool compress,
> + bool only_once);
> +
> +static int decompress_header(struct iaa_device_compression_mode *device_mode,
> + struct iaa_compression_mode *mode,
> + struct idxd_wq *wq)
> +{
> + dma_addr_t src_addr, src2_addr;
> + struct idxd_desc *idxd_desc;
> + struct iax_hw_desc *desc;
> + struct device *dev;
> + int ret = 0;
> +
> + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
> + if (IS_ERR(idxd_desc))
> + return PTR_ERR(idxd_desc);
> +
> + desc = idxd_desc->iax_hw;
> +
> + dev = &wq->idxd->pdev->dev;
> +
> + src_addr = dma_map_single(dev, (void *)mode->header_table,
> + mode->header_table_size, DMA_TO_DEVICE);
> + dev_dbg(dev, "%s: mode->name %s, src_addr %llx, dev %p, src %p, slen %d\n",
> + __func__, mode->name, src_addr, dev,
> + mode->header_table, mode->header_table_size);
> + if (unlikely(dma_mapping_error(dev, src_addr))) {
> + dev_dbg(dev, "dma_map_single err, exiting\n");
> + ret = -ENOMEM;
> + return ret;
> + }
> +
> + desc->flags = IAX_AECS_GEN_FLAG;
> + desc->opcode = IAX_OPCODE_DECOMPRESS;
> +
> + desc->src1_addr = (u64)src_addr;
> + desc->src1_size = mode->header_table_size;
> +
> + src2_addr = device_mode->aecs_decomp_table_dma_addr;
> + desc->src2_addr = (u64)src2_addr;
> + desc->src2_size = 1088;
> + dev_dbg(dev, "%s: mode->name %s, src2_addr %llx, dev %p, src2_size %d\n",
> + __func__, mode->name, desc->src2_addr, dev, desc->src2_size);
> + desc->max_dst_size = 0; // suppressed output
> +
> + desc->decompr_flags = mode->gen_decomp_table_flags;
> +
> + desc->priv = 1;
> +
> + desc->completion_addr = idxd_desc->compl_dma;
> +
> + ret = idxd_submit_desc(wq, idxd_desc);
> + if (ret) {
> + pr_err("%s: submit_desc failed ret=0x%x\n", __func__, ret);
> + goto out;
> + }
> +
> + ret = check_completion(dev, idxd_desc->iax_completion, false, false);
> + if (ret)
> + dev_dbg(dev, "%s: mode->name %s check_completion failed ret=%d\n",
> + __func__, mode->name, ret);
> + else
> + dev_dbg(dev, "%s: mode->name %s succeeded\n", __func__,
> + mode->name);
> +out:
> + dma_unmap_single(dev, src_addr, 1088, DMA_TO_DEVICE);
> +
> + return ret;
> +}
> +
> static int init_device_compression_mode(struct iaa_device *iaa_device,
> struct iaa_compression_mode *mode,
> int idx, struct idxd_wq *wq)
> @@ -300,6 +435,14 @@ static int init_device_compression_mode(struct iaa_device *iaa_device,
> memcpy(device_mode->aecs_comp_table->ll_sym, mode->ll_table, mode->ll_table_size);
> memcpy(device_mode->aecs_comp_table->d_sym, mode->d_table, mode->d_table_size);
>
> + if (mode->header_table) {
> + ret = decompress_header(device_mode, mode, wq);
> + if (ret) {
> + pr_debug("iaa header decompression failed: ret=%d\n", ret);
> + goto free;
> + }
> + }
> +
> if (mode->init) {
> ret = mode->init(device_mode);
> if (ret)
> @@ -372,18 +515,6 @@ static struct iaa_device *iaa_device_alloc(void)
> return iaa_device;
> }
>
> -static void iaa_device_free(struct iaa_device *iaa_device)
> -{
> - struct iaa_wq *iaa_wq, *next;
> -
> - list_for_each_entry_safe(iaa_wq, next, &iaa_device->wqs, list) {
> - list_del(&iaa_wq->list);
> - kfree(iaa_wq);
> - }
> -
> - kfree(iaa_device);
> -}
> -
> static bool iaa_has_wq(struct iaa_device *iaa_device, struct idxd_wq *wq)
> {
> struct iaa_wq *iaa_wq;
> @@ -426,12 +557,8 @@ static int init_iaa_device(struct iaa_device *iaa_device, struct iaa_wq *iaa_wq)
>
> static void del_iaa_device(struct iaa_device *iaa_device)
> {
> - remove_device_compression_modes(iaa_device);
> -
> list_del(&iaa_device->list);
>
> - iaa_device_free(iaa_device);
> -
> nr_iaa--;
> }
>
> @@ -497,6 +624,82 @@ static void clear_wq_table(void)
> pr_debug("cleared wq table\n");
> }
>
> +static void free_iaa_device(struct iaa_device *iaa_device)
> +{
> + if (!iaa_device)
> + return;
> +
> + remove_device_compression_modes(iaa_device);
> + kfree(iaa_device);
> +}
> +
> +static void __free_iaa_wq(struct iaa_wq *iaa_wq)
> +{
> + struct iaa_device *iaa_device;
> +
> + if (!iaa_wq)
> + return;
> +
> + iaa_device = iaa_wq->iaa_device;
> + if (iaa_device->n_wq == 0)
> + free_iaa_device(iaa_wq->iaa_device);
> +}
> +
> +static void free_iaa_wq(struct iaa_wq *iaa_wq)
> +{
> + struct idxd_wq *wq;
> +
> + __free_iaa_wq(iaa_wq);
> +
> + wq = iaa_wq->wq;
> +
> + kfree(iaa_wq);
> + idxd_wq_set_private(wq, NULL);
> +}
> +
> +static int iaa_wq_get(struct idxd_wq *wq)
> +{
> + struct idxd_device *idxd = wq->idxd;
> + struct iaa_wq *iaa_wq;
> + int ret = 0;
> +
> + spin_lock(&idxd->dev_lock);
> + iaa_wq = idxd_wq_get_private(wq);
> + if (iaa_wq && !iaa_wq->remove)
> + iaa_wq->ref++;
> + else
> + ret = -ENODEV;
> + spin_unlock(&idxd->dev_lock);
> +
> + return ret;
> +}
> +
> +static int iaa_wq_put(struct idxd_wq *wq)
> +{
> + struct idxd_device *idxd = wq->idxd;
> + struct iaa_wq *iaa_wq;
> + bool free = false;
> + int ret = 0;
> +
> + spin_lock(&idxd->dev_lock);
> + iaa_wq = idxd_wq_get_private(wq);
> + if (iaa_wq) {
> + iaa_wq->ref--;
> + if (iaa_wq->ref == 0 && iaa_wq->remove) {
> + __free_iaa_wq(iaa_wq);
> + idxd_wq_set_private(wq, NULL);
> + free = true;
> + }
> + } else {
> + ret = -ENODEV;
> + }
> + spin_unlock(&idxd->dev_lock);
> + if (free)
> + kfree(iaa_wq);
> +
> + return ret;
> +}
> +
> static void free_wq_table(void)
> {
> int cpu;
> @@ -580,6 +783,7 @@ static int save_iaa_wq(struct idxd_wq *wq)
> ret = add_iaa_wq(new_device, wq, &new_wq);
> if (ret) {
> del_iaa_device(new_device);
> + free_iaa_device(new_device);
> goto out;
> }
>
> @@ -587,6 +791,7 @@ static int save_iaa_wq(struct idxd_wq *wq)
> if (ret) {
> del_iaa_wq(new_device, new_wq->wq);
> del_iaa_device(new_device);
> + free_iaa_wq(new_wq);
> goto out;
> }
> }
> @@ -724,6 +929,624 @@ static void rebalance_wq_table(void)
> }
> }
>
> +static inline int check_completion(struct device *dev,
> + struct iax_completion_record *comp,
> + bool compress,
> + bool only_once)
> +{
> + char *op_str = compress ? "compress" : "decompress";
> + int ret = 0;
> +
> + while (!comp->status) {
> + if (only_once)
> + return -EAGAIN;
> + cpu_relax();
> + }
> +
> + if (comp->status != IAX_COMP_SUCCESS) {
> + if (comp->status == IAA_ERROR_WATCHDOG_EXPIRED) {
> + ret = -ETIMEDOUT;
> + dev_dbg(dev, "%s timed out, size=0x%x\n",
> + op_str, comp->output_size);
> + goto out;
> + }
> +
> + if (comp->status == IAA_ANALYTICS_ERROR &&
> + comp->error_code == IAA_ERROR_COMP_BUF_OVERFLOW && compress) {
> + ret = -E2BIG;
> + dev_dbg(dev, "compressed > uncompressed size,"
> + " not compressing, size=0x%x\n",
> + comp->output_size);
> + goto out;
> + }
> +
> + if (comp->status == IAA_ERROR_DECOMP_BUF_OVERFLOW) {
> + ret = -EOVERFLOW;
> + goto out;
> + }
> +
> + ret = -EINVAL;
> + dev_dbg(dev, "iaa %s status=0x%x, error=0x%x, size=0x%x\n",
> + op_str, comp->status, comp->error_code, comp->output_size);
> + print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET, 8, 1, comp, 64, 0);
> +
> + goto out;
> + }
> +out:
> + return ret;
> +}
> +
> +static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
> + struct idxd_wq *wq,
> + dma_addr_t src_addr, unsigned int slen,
> + dma_addr_t dst_addr, unsigned int *dlen,
> + u32 *compression_crc,
> + bool disable_async)
> +{
> + struct iaa_device_compression_mode *active_compression_mode;
> + struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct iaa_device *iaa_device;
> + struct idxd_desc *idxd_desc;
> + struct iax_hw_desc *desc;
> + struct idxd_device *idxd;
> + struct iaa_wq *iaa_wq;
> + struct pci_dev *pdev;
> + struct device *dev;
> + int ret = 0;
> +
> + iaa_wq = idxd_wq_get_private(wq);
> + iaa_device = iaa_wq->iaa_device;
> + idxd = iaa_device->idxd;
> + pdev = idxd->pdev;
> + dev = &pdev->dev;
> +
> + active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
> +
> + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
> + if (IS_ERR(idxd_desc)) {
> + dev_dbg(dev, "idxd descriptor allocation failed\n");
> + dev_dbg(dev, "iaa compress failed: ret=%ld\n", PTR_ERR(idxd_desc));
> + return PTR_ERR(idxd_desc);
> + }
> + desc = idxd_desc->iax_hw;
> +
> + desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR |
> + IDXD_OP_FLAG_RD_SRC2_AECS | IDXD_OP_FLAG_CC;
> + desc->opcode = IAX_OPCODE_COMPRESS;
> + desc->compr_flags = IAA_COMP_FLAGS;
> + desc->priv = 1;
> +
> + desc->src1_addr = (u64)src_addr;
> + desc->src1_size = slen;
> + desc->dst_addr = (u64)dst_addr;
> + desc->max_dst_size = *dlen;
> + desc->src2_addr = active_compression_mode->aecs_comp_table_dma_addr;
> + desc->src2_size = sizeof(struct aecs_comp_table_record);
> + desc->completion_addr = idxd_desc->compl_dma;
> +
> + dev_dbg(dev, "%s: compression mode %s,"
> + " desc->src1_addr %llx, desc->src1_size %d,"
> + " desc->dst_addr %llx, desc->max_dst_size %d,"
> + " desc->src2_addr %llx, desc->src2_size %d\n", __func__,
> + active_compression_mode->name,
> + desc->src1_addr, desc->src1_size, desc->dst_addr,
> + desc->max_dst_size, desc->src2_addr, desc->src2_size);
> +
> + ret = idxd_submit_desc(wq, idxd_desc);
> + if (ret) {
> + dev_dbg(dev, "submit_desc failed ret=%d\n", ret);
> + goto err;
> + }
> +
> + ret = check_completion(dev, idxd_desc->iax_completion, true, false);
> + if (ret) {
> + dev_dbg(dev, "check_completion failed ret=%d\n", ret);
> + goto err;
> + }
> +
> + *dlen = idxd_desc->iax_completion->output_size;
> +
> + *compression_crc = idxd_desc->iax_completion->crc;
> +
> + idxd_free_desc(wq, idxd_desc);
> +out:
> + return ret;
> +err:
> + idxd_free_desc(wq, idxd_desc);
> + dev_dbg(dev, "iaa compress failed: ret=%d\n", ret);
> +
> + goto out;
> +}
> +
> +static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
> + struct idxd_wq *wq,
> + dma_addr_t src_addr, unsigned int slen,
> + dma_addr_t dst_addr, unsigned int *dlen,
> + u32 compression_crc)
> +{
> + struct iaa_device_compression_mode *active_compression_mode;
> + struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct iaa_device *iaa_device;
> + struct idxd_desc *idxd_desc;
> + struct iax_hw_desc *desc;
> + struct idxd_device *idxd;
> + struct iaa_wq *iaa_wq;
> + struct pci_dev *pdev;
> + struct device *dev;
> + int ret = 0;
> +
> + iaa_wq = idxd_wq_get_private(wq);
> + iaa_device = iaa_wq->iaa_device;
> + idxd = iaa_device->idxd;
> + pdev = idxd->pdev;
> + dev = &pdev->dev;
> +
> + active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
> +
> + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
> + if (IS_ERR(idxd_desc)) {
> + dev_dbg(dev, "idxd descriptor allocation failed\n");
> + dev_dbg(dev, "iaa compress failed: ret=%ld\n",
> + PTR_ERR(idxd_desc));
> + return PTR_ERR(idxd_desc);
> + }
> + desc = idxd_desc->iax_hw;
> +
> + /* Verify (optional) - decompress and check crc, suppress dest write */
> +
> + desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC;
> + desc->opcode = IAX_OPCODE_DECOMPRESS;
> + desc->decompr_flags = IAA_DECOMP_FLAGS | IAA_DECOMP_SUPPRESS_OUTPUT;
> + desc->priv = 1;
> +
> + desc->src1_addr = (u64)dst_addr;
> + desc->src1_size = *dlen;
> + desc->dst_addr = (u64)src_addr;
> + desc->max_dst_size = slen;
> + desc->completion_addr = idxd_desc->compl_dma;
> +
> + dev_dbg(dev, "(verify) compression mode %s,"
> + " desc->src1_addr %llx, desc->src1_size %d,"
> + " desc->dst_addr %llx, desc->max_dst_size %d,"
> + " desc->src2_addr %llx, desc->src2_size %d\n",
> + active_compression_mode->name,
> + desc->src1_addr, desc->src1_size, desc->dst_addr,
> + desc->max_dst_size, desc->src2_addr, desc->src2_size);
> +
> + ret = idxd_submit_desc(wq, idxd_desc);
> + if (ret) {
> + dev_dbg(dev, "submit_desc (verify) failed ret=%d\n", ret);
> + goto err;
> + }
> +
> + ret = check_completion(dev, idxd_desc->iax_completion, false, false);
> + if (ret) {
> + dev_dbg(dev, "(verify) check_completion failed ret=%d\n", ret);
> + goto err;
> + }
> +
> + if (compression_crc != idxd_desc->iax_completion->crc) {
> + ret = -EINVAL;
> + dev_dbg(dev, "(verify) iaa comp/decomp crc mismatch:"
> + " comp=0x%x, decomp=0x%x\n", compression_crc,
> + idxd_desc->iax_completion->crc);
> + print_hex_dump(KERN_INFO, "cmp-rec: ", DUMP_PREFIX_OFFSET,
> + 8, 1, idxd_desc->iax_completion, 64, 0);
> + goto err;
> + }
> +
> + idxd_free_desc(wq, idxd_desc);
> +out:
> + return ret;
> +err:
> + idxd_free_desc(wq, idxd_desc);
> + dev_dbg(dev, "iaa compress failed: ret=%d\n", ret);
> +
> + goto out;
> +}
> +
> +static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
> + struct idxd_wq *wq,
> + dma_addr_t src_addr, unsigned int slen,
> + dma_addr_t dst_addr, unsigned int *dlen,
> + bool disable_async)
> +{
> + struct iaa_device_compression_mode *active_compression_mode;
> + struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
> + struct iaa_device *iaa_device;
> + struct idxd_desc *idxd_desc;
> + struct iax_hw_desc *desc;
> + struct idxd_device *idxd;
> + struct iaa_wq *iaa_wq;
> + struct pci_dev *pdev;
> + struct device *dev;
> + int ret = 0;
> +
> + iaa_wq = idxd_wq_get_private(wq);
> + iaa_device = iaa_wq->iaa_device;
> + idxd = iaa_device->idxd;
> + pdev = idxd->pdev;
> + dev = &pdev->dev;
> +
> + active_compression_mode = get_iaa_device_compression_mode(iaa_device, ctx->mode);
> +
> + idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
> + if (IS_ERR(idxd_desc)) {
> + dev_dbg(dev, "idxd descriptor allocation failed\n");
> + dev_dbg(dev, "iaa decompress failed: ret=%ld\n",
> + PTR_ERR(idxd_desc));
> + return PTR_ERR(idxd_desc);
> + }
> + desc = idxd_desc->iax_hw;
> +
> + desc->flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CC;
> + desc->opcode = IAX_OPCODE_DECOMPRESS;
> + desc->max_dst_size = PAGE_SIZE;
> + desc->decompr_flags = IAA_DECOMP_FLAGS;
> + desc->priv = 1;
> +
> + desc->src1_addr = (u64)src_addr;
> + desc->dst_addr = (u64)dst_addr;
> + desc->max_dst_size = *dlen;
> + desc->src1_size = slen;
> + desc->completion_addr = idxd_desc->compl_dma;
> +
> + dev_dbg(dev, "%s: decompression mode %s,"
> + " desc->src1_addr %llx, desc->src1_size %d,"
> + " desc->dst_addr %llx, desc->max_dst_size %d,"
> + " desc->src2_addr %llx, desc->src2_size %d\n", __func__,
> + active_compression_mode->name,
> + desc->src1_addr, desc->src1_size, desc->dst_addr,
> + desc->max_dst_size, desc->src2_addr, desc->src2_size);
> +
> + ret = idxd_submit_desc(wq, idxd_desc);
> + if (ret) {
> + dev_dbg(dev, "submit_desc failed ret=%d\n", ret);
> + goto err;
> + }
> +
> + ret = check_completion(dev, idxd_desc->iax_completion, false, false);
> + if (ret) {
> + dev_dbg(dev, "check_completion failed ret=%d\n", ret);
> + goto err;
> + }
> +
> + *dlen = idxd_desc->iax_completion->output_size;
> +
> + idxd_free_desc(wq, idxd_desc);
> +out:
> + return ret;
> +err:
> + idxd_free_desc(wq, idxd_desc);
> + dev_dbg(dev, "iaa decompress failed: ret=%d\n", ret);
> +
> + goto out;
> +}
> +
> +static int iaa_comp_acompress(struct acomp_req *req)
> +{
> + struct iaa_compression_ctx *compression_ctx;
> + struct crypto_tfm *tfm = req->base.tfm;
> + dma_addr_t src_addr, dst_addr;
> + int nr_sgs, cpu, ret = 0;
> + struct iaa_wq *iaa_wq;
> + u32 compression_crc;
> + struct idxd_wq *wq;
> + struct device *dev;
> +
> + compression_ctx = crypto_tfm_ctx(tfm);
> +
> + if (!iaa_crypto_enabled) {
> + pr_debug("iaa_crypto disabled, not compressing\n");
> + return -ENODEV;
> + }
> +
> + if (!req->src || !req->slen) {
> + pr_debug("invalid src, not compressing\n");
> + return -EINVAL;
> + }
> +
> + cpu = get_cpu();
> + wq = wq_table_next_wq(cpu);
> + put_cpu();
> + if (!wq) {
> + pr_debug("no wq configured for cpu=%d\n", cpu);
> + return -ENODEV;
> + }
> +
> + ret = iaa_wq_get(wq);
> + if (ret) {
> + pr_debug("no wq available for cpu=%d\n", cpu);
> + return -ENODEV;
> + }
> +
> + iaa_wq = idxd_wq_get_private(wq);
> +
> + if (!req->dst) {
> + gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
> + /* incompressible data will always be < 2 * slen */
> + req->dlen = 2 * req->slen;
2 * req->slen is an estimated size for dst buf. When slen is greater
than 2048 bytes, dlen is greater than 4096 bytes.
> + req->dst = sgl_alloc(req->dlen, flags, NULL);
> + if (!req->dst) {
> + ret = -ENOMEM;
> + goto out;
> + }
> + }
> +
> + dev = &wq->idxd->pdev->dev;
> +
> + nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
> + if (nr_sgs <= 0 || nr_sgs > 1) {
> + dev_dbg(dev, "couldn't map src sg for iaa device %d,"
> + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
> + iaa_wq->wq->id, ret);
> + ret = -EIO;
> + goto out;
> + }
> + src_addr = sg_dma_address(req->src);
> + dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
> + " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
> + req->src, req->slen, sg_dma_len(req->src));
> +
> + nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> + if (nr_sgs <= 0 || nr_sgs > 1) {
When dlen is greater than 4096 bytes, nr_sgs may be greater than 1,
but the actual output size may be less than 4096 bytes.
In other words, the nr_sgs > 1 condition may reject a request that could
otherwise have been handled.
> + dev_dbg(dev, "couldn't map dst sg for iaa device %d,"
> + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
> + iaa_wq->wq->id, ret);
> + ret = -EIO;
> + goto err_map_dst;
> + }
> + dst_addr = sg_dma_address(req->dst);
> + dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
> + " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
> + req->dst, req->dlen, sg_dma_len(req->dst));
> +
> + ret = iaa_compress(tfm, req, wq, src_addr, req->slen, dst_addr,
> + &req->dlen, &compression_crc, false);
> + if (ret == -EINPROGRESS)
> + return ret;
> +
> + if (!ret && compression_ctx->verify_compress) {
> + dma_sync_sg_for_device(dev, req->dst, 1, DMA_FROM_DEVICE);
> + dma_sync_sg_for_device(dev, req->src, 1, DMA_TO_DEVICE);
> + ret = iaa_compress_verify(tfm, req, wq, src_addr, req->slen,
> + dst_addr, &req->dlen, compression_crc);
> + }
> +
> + if (ret)
> + dev_dbg(dev, "asynchronous compress failed ret=%d\n", ret);
> +
> + dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> +err_map_dst:
> + dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
> +out:
> + iaa_wq_put(wq);
> +
> + return ret;
> +}
> +
> +static int iaa_comp_adecompress_alloc_dest(struct acomp_req *req)
> +{
> + gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
> + GFP_KERNEL : GFP_ATOMIC;
> + struct crypto_tfm *tfm = req->base.tfm;
> + dma_addr_t src_addr, dst_addr;
> + int nr_sgs, cpu, ret = 0;
> + struct iaa_wq *iaa_wq;
> + struct device *dev;
> + struct idxd_wq *wq;
> +
> + cpu = get_cpu();
> + wq = wq_table_next_wq(cpu);
> + put_cpu();
> + if (!wq) {
> + pr_debug("no wq configured for cpu=%d\n", cpu);
> + return -ENODEV;
> + }
> +
> + ret = iaa_wq_get(wq);
> + if (ret) {
> + pr_debug("no wq available for cpu=%d\n", cpu);
> + return -ENODEV;
> + }
> +
> + iaa_wq = idxd_wq_get_private(wq);
> +
> + dev = &wq->idxd->pdev->dev;
> +
> + nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
> + if (nr_sgs <= 0 || nr_sgs > 1) {
> + dev_dbg(dev, "couldn't map src sg for iaa device %d,"
> + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
> + iaa_wq->wq->id, ret);
> + ret = -EIO;
> + goto out;
> + }
> + src_addr = sg_dma_address(req->src);
> + dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
> + " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
> + req->src, req->slen, sg_dma_len(req->src));
> +
> + req->dlen = 4 * req->slen; /* start with ~avg comp ratio */
4 * req->slen is an estimated size for dst buf. When slen is greater
than 1024 bytes, dlen is greater than 4096 bytes.
> +alloc_dest:
> + req->dst = sgl_alloc(req->dlen, flags, NULL);
> + if (!req->dst) {
> + ret = -ENOMEM;
> + goto out;
> + }
> +
> + nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> + if (nr_sgs <= 0 || nr_sgs > 1) {
When dlen is greater than 4096 bytes, nr_sgs may be greater than 1,
which can mean that data compressed by iaa crypto can't be
decompressed by iaa crypto.
> + dev_dbg(dev, "couldn't map dst sg for iaa device %d,"
> + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
> + iaa_wq->wq->id, ret);
> + ret = -EIO;
> + goto err_map_dst;
> + }
> +
> + dst_addr = sg_dma_address(req->dst);
> + dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
> + " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
> + req->dst, req->dlen, sg_dma_len(req->dst));
> + ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
> + dst_addr, &req->dlen, true);
> + if (ret == -EOVERFLOW) {
> + dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> + sgl_free(req->dst);
> + req->dlen *= 2;
> + if (req->dlen > CRYPTO_ACOMP_DST_MAX)
> + goto err_map_dst;
> + goto alloc_dest;
> + }
> +
> + if (ret != 0)
> + dev_dbg(dev, "asynchronous decompress failed ret=%d\n", ret);
> +
> + dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> +err_map_dst:
> + dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
> +out:
> + iaa_wq_put(wq);
> +
> + return ret;
> +}
> +
> +static int iaa_comp_adecompress(struct acomp_req *req)
> +{
> + struct crypto_tfm *tfm = req->base.tfm;
> + dma_addr_t src_addr, dst_addr;
> + int nr_sgs, cpu, ret = 0;
> + struct iaa_wq *iaa_wq;
> + struct device *dev;
> + struct idxd_wq *wq;
> +
> + if (!iaa_crypto_enabled) {
> + pr_debug("iaa_crypto disabled, not decompressing\n");
> + return -ENODEV;
> + }
> +
> + if (!req->src || !req->slen) {
> + pr_debug("invalid src, not decompressing\n");
> + return -EINVAL;
> + }
> +
> + if (!req->dst)
> + return iaa_comp_adecompress_alloc_dest(req);
> +
> + cpu = get_cpu();
> + wq = wq_table_next_wq(cpu);
> + put_cpu();
> + if (!wq) {
> + pr_debug("no wq configured for cpu=%d\n", cpu);
> + return -ENODEV;
> + }
> +
> + ret = iaa_wq_get(wq);
> + if (ret) {
> + pr_debug("no wq available for cpu=%d\n", cpu);
> + return -ENODEV;
> + }
> +
> + iaa_wq = idxd_wq_get_private(wq);
> +
> + dev = &wq->idxd->pdev->dev;
> +
> + nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
> + if (nr_sgs <= 0 || nr_sgs > 1) {
> + dev_dbg(dev, "couldn't map src sg for iaa device %d,"
> + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
> + iaa_wq->wq->id, ret);
> + ret = -EIO;
> + goto out;
> + }
> + src_addr = sg_dma_address(req->src);
> + dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
> + " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
> + req->src, req->slen, sg_dma_len(req->src));
> +
> + nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> + if (nr_sgs <= 0 || nr_sgs > 1) {
> + dev_dbg(dev, "couldn't map dst sg for iaa device %d,"
> + " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
> + iaa_wq->wq->id, ret);
> + ret = -EIO;
> + goto err_map_dst;
> + }
> + dst_addr = sg_dma_address(req->dst);
> + dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
> + " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
> + req->dst, req->dlen, sg_dma_len(req->dst));
> +
> + ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
> + dst_addr, &req->dlen, false);
> + if (ret == -EINPROGRESS)
> + return ret;
> +
> + if (ret != 0)
> + dev_dbg(dev, "asynchronous decompress failed ret=%d\n", ret);
> +
> + dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> +err_map_dst:
> + dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
> +out:
> + iaa_wq_put(wq);
> +
> + return ret;
> +}
> +
> +static void compression_ctx_init(struct iaa_compression_ctx *ctx)
> +{
> + ctx->verify_compress = iaa_verify_compress;
> +}
> +
> +static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
> +{
> + struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
> + struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
> +
> + compression_ctx_init(ctx);
> +
> + ctx->mode = IAA_MODE_FIXED;
> +
> + return 0;
> +}
> +
> +static struct acomp_alg iaa_acomp_fixed_deflate = {
> + .init = iaa_comp_init_fixed,
> + .compress = iaa_comp_acompress,
> + .decompress = iaa_comp_adecompress,
> + .dst_free = sgl_free,
> + .base = {
> + .cra_name = "deflate-iaa",
> + .cra_driver_name = "deflate_iaa",
> + .cra_ctxsize = sizeof(struct iaa_compression_ctx),
> + .cra_module = THIS_MODULE,
> + }
> +};
> +
> +static int iaa_register_compression_device(void)
> +{
> + int ret;
> +
> + ret = crypto_register_acomp(&iaa_acomp_fixed_deflate);
> + if (ret) {
> + pr_err("deflate algorithm acomp fixed registration failed (%d)\n", ret);
> + goto out;
> + }
> +
> + iaa_crypto_registered = true;
> +out:
> + return ret;
> +}
> +
> +static int iaa_unregister_compression_device(void)
> +{
> + if (iaa_crypto_registered)
> + crypto_unregister_acomp(&iaa_acomp_fixed_deflate);
> +
> + return 0;
> +}
> +
> static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
> {
> struct idxd_wq *wq = idxd_dev_to_wq(idxd_dev);
> @@ -741,6 +1564,11 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
>
> mutex_lock(&wq->wq_lock);
>
> + if (idxd_wq_get_private(wq)) {
> + mutex_unlock(&wq->wq_lock);
> + return -EBUSY;
> + }
> +
> if (!idxd_wq_driver_name_match(wq, dev)) {
> dev_dbg(dev, "wq %d.%d driver_name match failed: wq driver_name %s, dev driver name %s\n",
> idxd->id, wq->id, wq->driver_name, dev->driver->name);
> @@ -774,12 +1602,28 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
>
> rebalance_wq_table();
>
> + if (first_wq) {
> + iaa_crypto_enabled = true;
> + ret = iaa_register_compression_device();
> + if (ret != 0) {
> + iaa_crypto_enabled = false;
> + dev_dbg(dev, "IAA compression device registration failed\n");
> + goto err_register;
> + }
> + try_module_get(THIS_MODULE);
> +
> + pr_info("iaa_crypto now ENABLED\n");
> + }
> +
> mutex_unlock(&iaa_devices_lock);
> out:
> mutex_unlock(&wq->wq_lock);
>
> return ret;
>
> +err_register:
> + remove_iaa_wq(wq);
> + free_iaa_wq(idxd_wq_get_private(wq));
> err_save:
> if (first_wq)
> free_wq_table();
> @@ -795,6 +1639,9 @@ static int iaa_crypto_probe(struct idxd_dev *idxd_dev)
> static void iaa_crypto_remove(struct idxd_dev *idxd_dev)
> {
> struct idxd_wq *wq = idxd_dev_to_wq(idxd_dev);
> + struct idxd_device *idxd = wq->idxd;
> + struct iaa_wq *iaa_wq;
> + bool free = false;
>
> idxd_wq_quiesce(wq);
>
> @@ -802,11 +1649,37 @@ static void iaa_crypto_remove(struct idxd_dev *idxd_dev)
> mutex_lock(&iaa_devices_lock);
>
> remove_iaa_wq(wq);
> +
> + spin_lock(&idxd->dev_lock);
> + iaa_wq = idxd_wq_get_private(wq);
> + if (!iaa_wq) {
> + pr_err("%s: no iaa_wq available to remove\n", __func__);
> + return;
> + }
> +
> + if (iaa_wq->ref) {
> + iaa_wq->remove = true;
> + } else {
> + wq = iaa_wq->wq;
> + __free_iaa_wq(iaa_wq);
> + idxd_wq_set_private(wq, NULL);
> + free = true;
> + }
> + spin_unlock(&idxd->dev_lock);
> +
> + if (free)
> + kfree(iaa_wq);
> +
> drv_disable_wq(wq);
> rebalance_wq_table();
>
> - if (nr_iaa == 0)
> + if (nr_iaa == 0) {
> + iaa_crypto_enabled = false;
> free_wq_table();
> + module_put(THIS_MODULE);
> +
> + pr_info("iaa_crypto now DISABLED\n");
> + }
>
> mutex_unlock(&iaa_devices_lock);
> mutex_unlock(&wq->wq_lock);
> @@ -844,10 +1717,19 @@ static int __init iaa_crypto_init_module(void)
> goto err_driver_reg;
> }
>
> + ret = driver_create_file(&iaa_crypto_driver.drv,
> + &driver_attr_verify_compress);
> + if (ret) {
> + pr_debug("IAA verify_compress attr creation failed\n");
> + goto err_verify_attr_create;
> + }
> +
> pr_debug("initialized\n");
> out:
> return ret;
>
> +err_verify_attr_create:
> + idxd_driver_unregister(&iaa_crypto_driver);
> err_driver_reg:
> iaa_aecs_cleanup_fixed();
>
> @@ -856,6 +1738,11 @@ static int __init iaa_crypto_init_module(void)
>
> static void __exit iaa_crypto_cleanup_module(void)
> {
> + if (iaa_unregister_compression_device())
> + pr_debug("IAA compression device unregister failed\n");
> +
> + driver_remove_file(&iaa_crypto_driver.drv,
> + &driver_attr_verify_compress);
> idxd_driver_unregister(&iaa_crypto_driver);
> iaa_aecs_cleanup_fixed();
>
> --
> 2.34.1
>

Thanks.
Rex Zhang

2023-07-17 21:58:08

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v7 12/14] crypto: iaa - Add support for deflate-iaa compression algorithm

Hi Rex,

On Mon, 2023-07-17 at 10:12 +0800, Rex Zhang wrote:
> Hi, Tom,
>

[snip]

> > +
> > +static int iaa_comp_acompress(struct acomp_req *req)
> > +{
> > +       struct iaa_compression_ctx *compression_ctx;
> > +       struct crypto_tfm *tfm = req->base.tfm;
> > +       dma_addr_t src_addr, dst_addr;
> > +       int nr_sgs, cpu, ret = 0;
> > +       struct iaa_wq *iaa_wq;
> > +       u32 compression_crc;
> > +       struct idxd_wq *wq;
> > +       struct device *dev;
> > +
> > +       compression_ctx = crypto_tfm_ctx(tfm);
> > +
> > +       if (!iaa_crypto_enabled) {
> > +               pr_debug("iaa_crypto disabled, not compressing\n");
> > +               return -ENODEV;
> > +       }
> > +
> > +       if (!req->src || !req->slen) {
> > +               pr_debug("invalid src, not compressing\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       cpu = get_cpu();
> > +       wq = wq_table_next_wq(cpu);
> > +       put_cpu();
> > +       if (!wq) {
> > +               pr_debug("no wq configured for cpu=%d\n", cpu);
> > +               return -ENODEV;
> > +       }
> > +
> > +       ret = iaa_wq_get(wq);
> > +       if (ret) {
> > +               pr_debug("no wq available for cpu=%d\n", cpu);
> > +               return -ENODEV;
> > +       }
> > +
> > +       iaa_wq = idxd_wq_get_private(wq);
> > +
> > +       if (!req->dst) {
> > +               gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
> > +               /* incompressible data will always be < 2 * slen */
> > +               req->dlen = 2 * req->slen;
> 2 * req->slen is an estimated size for dst buf. When slen is greater
> than 2048 bytes, dlen is greater than 4096 bytes.

Right, so you're saying that because sgl_alloc uses order 0, this could
result in nr_sgs > 1. Could also just change this to sg_init_one like
all the other callers.

> > +               req->dst = sgl_alloc(req->dlen, flags, NULL);
> > +               if (!req->dst) {
> > +                       ret = -ENOMEM;
> > +                       goto out;
> > +               }
> > +       }
> > +
> > +       dev = &wq->idxd->pdev->dev;
> > +
> > +       nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
> > +       if (nr_sgs <= 0 || nr_sgs > 1) {
> > +               dev_dbg(dev, "couldn't map src sg for iaa device %d,"
> > +                       " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
> > +                       iaa_wq->wq->id, ret);
> > +               ret = -EIO;
> > +               goto out;
> > +       }
> > +       src_addr = sg_dma_address(req->src);
> > +       dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
> > +               " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
> > +               req->src, req->slen, sg_dma_len(req->src));
> > +
> > +       nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> > +       if (nr_sgs <= 0 || nr_sgs > 1) {
> When dlen is greater than 4096 bytes, nr_sgs may be greater than 1,
> but the actual output size may be less than 4096 bytes.
> In other words, the nr_sgs > 1 condition may reject a request that could
> otherwise have been handled.

Currently all existing callers use sg_init_one(), so nr_sgs is never >
1. But yes, we should add code to be able to handle > 1, I agree.

Thanks,

Tom
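
For readers not familiar with the issue being discussed: sgl_alloc() builds
the destination list from order-0 pages, so any dlen larger than PAGE_SIZE
yields more than one scatterlist entry and trips the nr_sgs > 1 check. A
minimal sketch of the sg_init_one() alternative mentioned above (the helper
name, kmalloc'd buffer, and error handling are illustrative assumptions, not
code from the patch):

#include <crypto/acompress.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/*
 * Hedged sketch: allocate one physically contiguous dst buffer and describe
 * it with a single scatterlist entry, so dma_map_sg() can never return more
 * than one segment for it.
 */
static int iaa_alloc_single_dst(struct acomp_req *req, struct scatterlist *sg,
				gfp_t flags)
{
	void *buf;

	req->dlen = 2 * req->slen;	/* same size estimate as the patch */
	buf = kmalloc(req->dlen, flags);
	if (!buf)
		return -ENOMEM;

	sg_init_one(sg, buf, req->dlen);	/* exactly one sg entry */
	req->dst = sg;

	return 0;
}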


2023-07-22 01:27:08

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v7 12/14] crypto: iaa - Add support for deflate-iaa compression algorithm

On Mon, Jul 10, 2023 at 02:06:52PM -0500, Tom Zanussi wrote:
>
> Because the IAA hardware has a 4k history-window limitation, only
> buffers <= 4k, or that have been compressed using a <= 4k history
> window, are technically compliant with the deflate spec, which allows
> for a window of up to 32k. Because of this limitation, the IAA fixed
> mode deflate algorithm is given its own algorithm name, 'deflate-iaa'.

So compressed results produced by this can always be decompressed
by the generic algorithm, right?

If it's only when you decompress that you may encounter failures,
then I suggest that we still use the same algorithm name, but fall
back at run-time if the result cannot be decompressed by the
hardware. Is it possible to fail gracefully and then retry the
decompression in this case?

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-07-22 17:32:26

by Tom Zanussi

[permalink] [raw]
Subject: Re: [PATCH v7 12/14] crypto: iaa - Add support for deflate-iaa compression algorithm

On Sat, 2023-07-22 at 13:23 +1200, Herbert Xu wrote:
> On Mon, Jul 10, 2023 at 02:06:52PM -0500, Tom Zanussi wrote:
> >
> > Because the IAA hardware has a 4k history-window limitation, only
> > buffers <= 4k, or that have been compressed using a <= 4k history
> > window, are technically compliant with the deflate spec, which
> > allows
> > for a window of up to 32k.  Because of this limitation, the IAA
> > fixed
> > mode deflate algorithm is given its own algorithm name, 'deflate-
> > iaa'.
>
> So compressed results produced by this can always be decompressed
> by the generic algorithm, right?
>

Right.

> If it's only when you decompress that you may encounter failures,
> then I suggest that we still use the same algorithm name, but fall
> back at run-time if the result cannot be decompressed by the
> hardware.  Is it possible to fail gracefully and then retry the
> decompression in this case?
>

Yeah, I think that should be possible. I'll try it out and add it to
the next version. Thanks for the suggestion!

Tom
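
A rough sketch of the run-time fallback being discussed, assuming the driver
retries a rejected stream with the generic software "deflate" implementation
once the data has been linearized (the helper name and the crypto_comp usage
are assumptions for illustration, not code from this series):

#include <linux/crypto.h>
#include <linux/err.h>

/*
 * Hedged sketch: retry a failed hardware decompress with the software
 * "deflate" implementation. Assumes linear src/dst buffers; the driver's
 * scatterlist-based request would need to be linearized first.
 */
static int iaa_decompress_sw_fallback(const u8 *src, unsigned int slen,
				      u8 *dst, unsigned int *dlen)
{
	struct crypto_comp *sw_tfm;
	int ret;

	sw_tfm = crypto_alloc_comp("deflate", 0, 0);
	if (IS_ERR(sw_tfm))
		return PTR_ERR(sw_tfm);

	ret = crypto_comp_decompress(sw_tfm, src, slen, dst, dlen);

	crypto_free_comp(sw_tfm);

	return ret;
}

In iaa_decompress(), a hardware error from check_completion() could then fall
through to a path like this instead of being returned to the caller.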


> Thanks,