2023-07-18 13:04:06

by Ard Biesheuvel

Subject: [RFC PATCH 00/21] crypto: consolidate and clean up compression APIs

This series is presented as an RFC, because I haven't quite convinced
myself that the acomp API really needs both scatterlists and request
objects to encapsulate the in- and output buffers, and perhaps there are
more drastic simplifications that we might consider.

However, the current situation with comp, scomp and acomp APIs is
definitely something that needs cleaning up, and so I implemented this
series under the working assumption that we will keep the current acomp
semantics wrt scatterlists and request objects.
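
As a minimal illustration of those semantics (not part of this series;
the function name and the choice of "deflate" are mine), a synchronous
caller currently drives acomp roughly like this, even when both buffers
are plain kmalloc'ed linear memory:

#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <crypto/acompress.h>

static int acomp_compress_linear(const void *in, unsigned int in_len,
                                 void *out, unsigned int out_len)
{
        struct scatterlist src, dst;
        struct crypto_acomp *tfm;
        struct acomp_req *req;
        DECLARE_CRYPTO_WAIT(wait);
        int err;

        tfm = crypto_alloc_acomp("deflate", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        req = acomp_request_alloc(tfm);
        if (!req) {
                crypto_free_acomp(tfm);
                return -ENOMEM;
        }

        /* 'in' and 'out' must be linearly mapped (no vmalloc/stack) */
        sg_init_one(&src, in, in_len);
        sg_init_one(&dst, out, out_len);
        acomp_request_set_params(req, &src, &dst, in_len, out_len);
        acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                   crypto_req_done, &wait);

        err = crypto_wait_req(crypto_acomp_compress(req), &wait);
        /* on success, req->dlen holds the compressed length */

        acomp_request_free(req);
        crypto_free_acomp(tfm);
        return err;
}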

Patch #1 drops zlib-deflate support in software, along with the test
cases we have for it. It has no users and should never have been
added.

Patch #2 removes the support for on-the-fly allocation of destination
buffers and scatterlists from the Intel QAT driver. This feature is
never used, and is not even implemented by all drivers (the HiSilicon
ZIP driver does not support it). The diffstat of this patch makes a
good case for why the caller should be in charge of allocating the
memory, not the driver.

Patch #3 removes this on-the-fly allocation from the core acomp API.

Patch #4 does a minimal conversion of IPcomp to the acomp API.

Patch #5 and #6 are independent UBIFS fixes for things I ran into while
working on patch #7.

Patch #7 converts UBIFS to the acomp API.

Patch #8 converts the zram block driver to the acomp API.

Patches #9 to #19 remove the existing 'comp' API implementations as well
as the core plumbing, now that all clients of the API have been
converted. (Note that pstore stopped using the 'comp' API as well, but
these changes are already queued elsewhere)

Patch #20 converts the generic deflate compression driver to the acomp
API, so that it can operate natively on discontiguous buffers rather
than requiring scratch buffers. Deflate is the only IPcomp compression
algorithm we actually implement in software in the kernel, so this
conversion could help IPcomp if we decide to convert it further and
remove the code that 'linearizes' SKBs in order to present them to the
compression API as a contiguous range.

Patch #21 converts the acomp-to-scomp adaptation layer so it no longer
requires per-CPU scratch buffers. This takes advantage of the fact that
all existing users of the acomp API pass contiguous memory regions, and
so scratch buffers are only needed in exceptional cases, and can be
allocated and deallocated on the fly. This removes the need for
preallocated per-CPU scratch buffers that can easily add up to tens of
megabytes on modern systems with high core counts and SMT.
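
To illustrate the idea behind patch #21 (a rough sketch under my own
assumptions, not the code in the patch): the adaptation layer only
needs a bounce buffer when the scatterlist does not describe a single,
linearly mapped region, and that buffer can be allocated on demand:

static void *scomp_map_sg(struct scatterlist *sg, unsigned int len,
                          void **bounce)
{
        *bounce = NULL;

        /* common case: one entry covering the whole range, not in highmem */
        if (sg_is_last(sg) && sg->length >= len && !PageHighMem(sg_page(sg)))
                return sg_virt(sg);

        /* exceptional case: copy into a temporary linear buffer */
        *bounce = kvmalloc(len, GFP_KERNEL);    /* sleeping context assumed */
        if (!*bounce)
                return NULL;

        sg_copy_to_buffer(sg, sg_nents(sg), *bounce, len);
        return *bounce;
}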

These changes have been build tested and only lightly runtime tested. In
particular, I haven't performed any thorough testing on the acomp
conversions of IPcomp, UBIFS and ZRAM. Any hints on suitable test
methods and test cases for each of these are highly appreciated.

Cc: Herbert Xu <[email protected]>
Cc: Eric Biggers <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Haren Myneni <[email protected]>
Cc: Nick Terrell <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Giovanni Cabiddu <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Jakub Kicinski <[email protected]>
Cc: Paolo Abeni <[email protected]>
Cc: Steffen Klassert <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]

Ard Biesheuvel (21):
crypto: scomp - Revert "add support for deflate rfc1950 (zlib)"
crypto: qat - Drop support for allocating destination buffers
crypto: acompress - Drop destination scatterlist allocation feature
net: ipcomp: Migrate to acomp API from deprecated comp API
ubifs: Pass worst-case buffer size to compression routines
ubifs: Avoid allocating buffer space unnecessarily
ubifs: Migrate to acomp compression API
zram: Migrate to acomp compression API
crypto: nx - Migrate to scomp API
crypto: 842 - drop obsolete 'comp' implementation
crypto: deflate - drop obsolete 'comp' implementation
crypto: lz4 - drop obsolete 'comp' implementation
crypto: lz4hc - drop obsolete 'comp' implementation
crypto: lzo-rle - drop obsolete 'comp' implementation
crypto: lzo - drop obsolete 'comp' implementation
crypto: zstd - drop obsolete 'comp' implementation
crypto: cavium/zip - drop obsolete 'comp' implementation
crypto: compress_null - drop obsolete 'comp' implementation
crypto: remove obsolete 'comp' compression API
crypto: deflate - implement acomp API directly
crypto: scompress - Drop the use of per-cpu scratch buffers

Documentation/crypto/architecture.rst | 2 -
crypto/842.c | 63 +---
crypto/Makefile | 2 +-
crypto/acompress.c | 6 -
crypto/api.c | 4 -
crypto/compress.c | 32 --
crypto/crypto_null.c | 31 +-
crypto/crypto_user_base.c | 16 -
crypto/crypto_user_stat.c | 4 -
crypto/deflate.c | 386 ++++++--------------
crypto/lz4.c | 61 +---
crypto/lz4hc.c | 63 +---
crypto/lzo-rle.c | 60 +--
crypto/lzo.c | 60 +--
crypto/proc.c | 3 -
crypto/scompress.c | 169 ++++-----
crypto/testmgr.c | 184 +---------
crypto/testmgr.h | 75 ----
crypto/zstd.c | 56 +--
drivers/block/zram/zcomp.c | 67 +++-
drivers/block/zram/zcomp.h | 7 +-
drivers/block/zram/zram_drv.c | 12 +-
drivers/crypto/cavium/zip/zip_crypto.c | 40 --
drivers/crypto/cavium/zip/zip_crypto.h | 10 -
drivers/crypto/cavium/zip/zip_main.c | 50 +--
drivers/crypto/intel/qat/qat_common/qat_bl.c | 159 --------
drivers/crypto/intel/qat/qat_common/qat_bl.h | 6 -
drivers/crypto/intel/qat/qat_common/qat_comp_algs.c | 86 +----
drivers/crypto/intel/qat/qat_common/qat_comp_req.h | 10 -
drivers/crypto/nx/nx-842.c | 34 +-
drivers/crypto/nx/nx-842.h | 14 +-
drivers/crypto/nx/nx-common-powernv.c | 30 +-
drivers/crypto/nx/nx-common-pseries.c | 32 +-
fs/ubifs/compress.c | 61 +++-
fs/ubifs/file.c | 46 +--
fs/ubifs/journal.c | 33 +-
fs/ubifs/ubifs.h | 15 +-
include/crypto/acompress.h | 21 +-
include/crypto/internal/scompress.h | 2 -
include/crypto/scatterwalk.h | 2 +-
include/linux/crypto.h | 49 +--
include/net/ipcomp.h | 4 +-
net/xfrm/xfrm_algo.c | 7 +-
net/xfrm/xfrm_ipcomp.c | 107 ++++--
44 files changed, 502 insertions(+), 1679 deletions(-)
delete mode 100644 crypto/compress.c

--
2.39.2



2023-07-18 13:04:11

by Ard Biesheuvel

Subject: [RFC PATCH 03/21] crypto: acompress - Drop destination scatterlist allocation feature

The acomp crypto code will allocate a destination scatterlist and its
backing pages on the fly if no destination is passed. This feature is
not used, and given that the caller should own this memory, it is far
better if the caller allocates it. This is especially true for
decompression, where the output size is essentially unbounded, and so
the caller already needs to provide the size for this feature to work
reliably.
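
Illustrative fragment only (not part of this patch; 'req', 'wait',
'in_buf' and 'in_len' stand for a caller's usual acomp setup and input):
with the allocation feature gone, the caller owns the destination and
bounds the decompressed size explicitly:

        unsigned int out_max = 4 * PAGE_SIZE;   /* caller-chosen worst case */
        struct scatterlist src, dst;
        void *out;
        int err;

        out = kmalloc(out_max, GFP_KERNEL);
        if (!out)
                return -ENOMEM;

        sg_init_one(&src, in_buf, in_len);
        sg_init_one(&dst, out, out_max);
        acomp_request_set_params(req, &src, &dst, in_len, out_max);
        err = crypto_wait_req(crypto_acomp_decompress(req), &wait);
        /* on success, req->dlen holds the actual decompressed length */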

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/acompress.c | 6 ----
crypto/scompress.c | 14 +---------
crypto/testmgr.c | 29 --------------------
include/crypto/acompress.h | 16 ++---------
4 files changed, 4 insertions(+), 61 deletions(-)

diff --git a/crypto/acompress.c b/crypto/acompress.c
index 1c682810a484dcdf..431876b0ee2096fd 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -71,7 +71,6 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)

acomp->compress = alg->compress;
acomp->decompress = alg->decompress;
- acomp->dst_free = alg->dst_free;
acomp->reqsize = alg->reqsize;

if (alg->exit)
@@ -173,11 +172,6 @@ void acomp_request_free(struct acomp_req *req)
if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
crypto_acomp_scomp_free_ctx(req);

- if (req->flags & CRYPTO_ACOMP_ALLOC_OUTPUT) {
- acomp->dst_free(req->dst);
- req->dst = NULL;
- }
-
__acomp_request_free(req);
}
EXPORT_SYMBOL_GPL(acomp_request_free);
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 442a82c9de7def1f..3155cdce9116e092 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -122,12 +122,9 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
return -EINVAL;

- if (req->dst && !req->dlen)
+ if (!req->dst || !req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
return -EINVAL;

- if (!req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
- req->dlen = SCOMP_SCRATCH_SIZE;
-
scratch = raw_cpu_ptr(&scomp_scratch);
spin_lock(&scratch->lock);

@@ -139,17 +136,9 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
ret = crypto_scomp_decompress(scomp, scratch->src, req->slen,
scratch->dst, &req->dlen, *ctx);
if (!ret) {
- if (!req->dst) {
- req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
- if (!req->dst) {
- ret = -ENOMEM;
- goto out;
- }
- }
scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen,
1);
}
-out:
spin_unlock(&scratch->lock);
return ret;
}
@@ -197,7 +186,6 @@ int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)

crt->compress = scomp_acomp_compress;
crt->decompress = scomp_acomp_decompress;
- crt->dst_free = sgl_free;
crt->reqsize = sizeof(void *);

return 0;
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index b41a8e8c1d1a1987..4971351f55dbabb9 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3497,21 +3497,6 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out;
}

-#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
- crypto_init_wait(&wait);
- sg_init_one(&src, input_vec, ilen);
- acomp_request_set_params(req, &src, NULL, ilen, 0);
-
- ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
- if (ret) {
- pr_err("alg: acomp: compression failed on NULL dst buffer test %d for %s: ret=%d\n",
- i + 1, algo, -ret);
- kfree(input_vec);
- acomp_request_free(req);
- goto out;
- }
-#endif
-
kfree(input_vec);
acomp_request_free(req);
}
@@ -3573,20 +3558,6 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out;
}

-#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
- crypto_init_wait(&wait);
- acomp_request_set_params(req, &src, NULL, ilen, 0);
-
- ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
- if (ret) {
- pr_err("alg: acomp: decompression failed on NULL dst buffer test %d for %s: ret=%d\n",
- i + 1, algo, -ret);
- kfree(input_vec);
- acomp_request_free(req);
- goto out;
- }
-#endif
-
kfree(input_vec);
acomp_request_free(req);
}
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index 574cffc90730f5f3..ccb6f3279bc8b32e 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -43,15 +43,12 @@ struct acomp_req {
*
* @compress: Function performs a compress operation
* @decompress: Function performs a de-compress operation
- * @dst_free: Frees destination buffer if allocated inside the
- * algorithm
* @reqsize: Context size for (de)compression requests
* @base: Common crypto API algorithm data structure
*/
struct crypto_acomp {
int (*compress)(struct acomp_req *req);
int (*decompress)(struct acomp_req *req);
- void (*dst_free)(struct scatterlist *dst);
unsigned int reqsize;
struct crypto_tfm base;
};
@@ -222,8 +219,7 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
{
req->base.complete = cmpl;
req->base.data = data;
- req->base.flags &= CRYPTO_ACOMP_ALLOC_OUTPUT;
- req->base.flags |= flgs & ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+ req->base.flags = flgs;
}

/**
@@ -233,11 +229,9 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
*
* @req: asynchronous compress request
* @src: pointer to input buffer scatterlist
- * @dst: pointer to output buffer scatterlist. If this is NULL, the
- * acomp layer will allocate the output memory
+ * @dst: pointer to output buffer scatterlist
* @slen: size of the input buffer
- * @dlen: size of the output buffer. If dst is NULL, this can be used by
- * the user to specify the maximum amount of memory to allocate
+ * @dlen: size of the output buffer
*/
static inline void acomp_request_set_params(struct acomp_req *req,
struct scatterlist *src,
@@ -249,10 +243,6 @@ static inline void acomp_request_set_params(struct acomp_req *req,
req->dst = dst;
req->slen = slen;
req->dlen = dlen;
-
- req->flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
- if (!req->dst)
- req->flags |= CRYPTO_ACOMP_ALLOC_OUTPUT;
}

static inline struct crypto_istat_compress *comp_get_stat(
--
2.39.2


2023-07-18 13:04:21

by Ard Biesheuvel

Subject: [RFC PATCH 09/21] crypto: nx - Migrate to scomp API

The only remaining user of 842 compression has been migrated to the
acomp compression API, and so the NX hardware driver has to follow suit,
given that no users of the obsolete 'comp' API remain, and it is going
to be removed.

So migrate the NX driver code to scomp. The resulting scomp
implementations will be wrapped and exposed as acomp implementations
via the crypto subsystem's acomp-to-scomp adaptation layer.
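
Illustrative only: users are unaffected by this change. The NX
algorithms remain reachable through the acomp interface, e.g.:

        struct crypto_acomp *tfm = crypto_alloc_acomp("842", 0, 0);

with the scomp adaptation layer sitting in between.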

Signed-off-by: Ard Biesheuvel <[email protected]>
---
drivers/crypto/nx/nx-842.c | 34 ++++++++++++--------
drivers/crypto/nx/nx-842.h | 14 ++++----
drivers/crypto/nx/nx-common-powernv.c | 30 ++++++++---------
drivers/crypto/nx/nx-common-pseries.c | 32 +++++++++---------
4 files changed, 57 insertions(+), 53 deletions(-)

diff --git a/drivers/crypto/nx/nx-842.c b/drivers/crypto/nx/nx-842.c
index 2ab90ec10e61ebe8..331b9cdf85e27044 100644
--- a/drivers/crypto/nx/nx-842.c
+++ b/drivers/crypto/nx/nx-842.c
@@ -101,9 +101,14 @@ static int update_param(struct nx842_crypto_param *p,
return 0;
}

-int nx842_crypto_init(struct crypto_tfm *tfm, struct nx842_driver *driver)
+void *nx842_crypto_alloc_ctx(struct crypto_scomp *tfm,
+ struct nx842_driver *driver)
{
- struct nx842_crypto_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct nx842_crypto_ctx *ctx;
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);

spin_lock_init(&ctx->lock);
ctx->driver = driver;
@@ -114,22 +119,23 @@ int nx842_crypto_init(struct crypto_tfm *tfm, struct nx842_driver *driver)
kfree(ctx->wmem);
free_page((unsigned long)ctx->sbounce);
free_page((unsigned long)ctx->dbounce);
- return -ENOMEM;
+ kfree(ctx);
+ return ERR_PTR(-ENOMEM);
}

- return 0;
+ return ctx;
}
-EXPORT_SYMBOL_GPL(nx842_crypto_init);
+EXPORT_SYMBOL_GPL(nx842_crypto_alloc_ctx);

-void nx842_crypto_exit(struct crypto_tfm *tfm)
+void nx842_crypto_free_ctx(struct crypto_scomp *tfm, void *p)
{
- struct nx842_crypto_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct nx842_crypto_ctx *ctx = p;

kfree(ctx->wmem);
free_page((unsigned long)ctx->sbounce);
free_page((unsigned long)ctx->dbounce);
}
-EXPORT_SYMBOL_GPL(nx842_crypto_exit);
+EXPORT_SYMBOL_GPL(nx842_crypto_free_ctx);

static void check_constraints(struct nx842_constraints *c)
{
@@ -246,11 +252,11 @@ static int compress(struct nx842_crypto_ctx *ctx,
return update_param(p, slen, dskip + dlen);
}

-int nx842_crypto_compress(struct crypto_tfm *tfm,
+int nx842_crypto_compress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen)
+ u8 *dst, unsigned int *dlen, void *pctx)
{
- struct nx842_crypto_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct nx842_crypto_ctx *ctx = pctx;
struct nx842_crypto_header *hdr = &ctx->header;
struct nx842_crypto_param p;
struct nx842_constraints c = *ctx->driver->constraints;
@@ -429,11 +435,11 @@ static int decompress(struct nx842_crypto_ctx *ctx,
return update_param(p, slen + padding, dlen);
}

-int nx842_crypto_decompress(struct crypto_tfm *tfm,
+int nx842_crypto_decompress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen)
+ u8 *dst, unsigned int *dlen, void *pctx)
{
- struct nx842_crypto_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct nx842_crypto_ctx *ctx = pctx;
struct nx842_crypto_header *hdr;
struct nx842_crypto_param p;
struct nx842_constraints c = *ctx->driver->constraints;
diff --git a/drivers/crypto/nx/nx-842.h b/drivers/crypto/nx/nx-842.h
index 7590bfb24d79bf42..de9dc8df62ed9dcb 100644
--- a/drivers/crypto/nx/nx-842.h
+++ b/drivers/crypto/nx/nx-842.h
@@ -12,6 +12,7 @@
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/ratelimit.h>
+#include <crypto/internal/scompress.h>

/* Restrictions on Data Descriptor List (DDL) and Entry (DDE) buffers
*
@@ -177,13 +178,14 @@ struct nx842_crypto_ctx {
struct nx842_driver *driver;
};

-int nx842_crypto_init(struct crypto_tfm *tfm, struct nx842_driver *driver);
-void nx842_crypto_exit(struct crypto_tfm *tfm);
-int nx842_crypto_compress(struct crypto_tfm *tfm,
+void *nx842_crypto_alloc_ctx(struct crypto_scomp *tfm,
+ struct nx842_driver *driver);
+void nx842_crypto_free_ctx(struct crypto_scomp *tfm, void *ctx);
+int nx842_crypto_compress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen);
-int nx842_crypto_decompress(struct crypto_tfm *tfm,
+ u8 *dst, unsigned int *dlen, void *ctx);
+int nx842_crypto_decompress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen);
+ u8 *dst, unsigned int *dlen, void *ctx);

#endif /* __NX_842_H__ */
diff --git a/drivers/crypto/nx/nx-common-powernv.c b/drivers/crypto/nx/nx-common-powernv.c
index 8c859872c1839eca..80023686f3e21b72 100644
--- a/drivers/crypto/nx/nx-common-powernv.c
+++ b/drivers/crypto/nx/nx-common-powernv.c
@@ -1031,23 +1031,21 @@ static struct nx842_driver nx842_powernv_driver = {
.decompress = nx842_powernv_decompress,
};

-static int nx842_powernv_crypto_init(struct crypto_tfm *tfm)
+static void *nx842_powernv_crypto_alloc_ctx(struct crypto_scomp *tfm)
{
- return nx842_crypto_init(tfm, &nx842_powernv_driver);
+ return nx842_crypto_alloc_ctx(tfm, &nx842_powernv_driver);
}

-static struct crypto_alg nx842_powernv_alg = {
- .cra_name = "842",
- .cra_driver_name = "842-nx",
- .cra_priority = 300,
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct nx842_crypto_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = nx842_powernv_crypto_init,
- .cra_exit = nx842_crypto_exit,
- .cra_u = { .compress = {
- .coa_compress = nx842_crypto_compress,
- .coa_decompress = nx842_crypto_decompress } }
+static struct scomp_alg nx842_powernv_alg = {
+ .base.cra_name = "842",
+ .base.cra_driver_name = "842-nx",
+ .base.cra_priority = 300,
+ .base.cra_module = THIS_MODULE,
+
+ .alloc_ctx = nx842_powernv_crypto_alloc_ctx,
+ .free_ctx = nx842_crypto_free_ctx,
+ .compress = nx842_crypto_compress,
+ .decompress = nx842_crypto_decompress,
};

static __init int nx_compress_powernv_init(void)
@@ -1107,7 +1105,7 @@ static __init int nx_compress_powernv_init(void)
nx842_powernv_exec = nx842_exec_vas;
}

- ret = crypto_register_alg(&nx842_powernv_alg);
+ ret = crypto_register_scomp(&nx842_powernv_alg);
if (ret) {
nx_delete_coprocs();
return ret;
@@ -1128,7 +1126,7 @@ static void __exit nx_compress_powernv_exit(void)
if (!nx842_ct)
vas_unregister_api_powernv();

- crypto_unregister_alg(&nx842_powernv_alg);
+ crypto_unregister_scomp(&nx842_powernv_alg);

nx_delete_coprocs();
}
diff --git a/drivers/crypto/nx/nx-common-pseries.c b/drivers/crypto/nx/nx-common-pseries.c
index 35f2d0d8507ed774..6232901adc495ed3 100644
--- a/drivers/crypto/nx/nx-common-pseries.c
+++ b/drivers/crypto/nx/nx-common-pseries.c
@@ -1008,23 +1008,21 @@ static struct nx842_driver nx842_pseries_driver = {
.decompress = nx842_pseries_decompress,
};

-static int nx842_pseries_crypto_init(struct crypto_tfm *tfm)
+static void *nx842_pseries_crypto_alloc_ctx(struct crypto_scomp *tfm)
{
- return nx842_crypto_init(tfm, &nx842_pseries_driver);
+ return nx842_crypto_alloc_ctx(tfm, &nx842_pseries_driver);
}

-static struct crypto_alg nx842_pseries_alg = {
- .cra_name = "842",
- .cra_driver_name = "842-nx",
- .cra_priority = 300,
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct nx842_crypto_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = nx842_pseries_crypto_init,
- .cra_exit = nx842_crypto_exit,
- .cra_u = { .compress = {
- .coa_compress = nx842_crypto_compress,
- .coa_decompress = nx842_crypto_decompress } }
+static struct scomp_alg nx842_pseries_alg = {
+ .base.cra_name = "842",
+ .base.cra_driver_name = "842-nx",
+ .base.cra_priority = 300,
+ .base.cra_module = THIS_MODULE,
+
+ .alloc_ctx = nx842_pseries_crypto_alloc_ctx,
+ .free_ctx = nx842_crypto_free_ctx,
+ .compress = nx842_crypto_compress,
+ .decompress = nx842_crypto_decompress,
};

static int nx842_probe(struct vio_dev *viodev,
@@ -1072,7 +1070,7 @@ static int nx842_probe(struct vio_dev *viodev,
if (ret)
goto error;

- ret = crypto_register_alg(&nx842_pseries_alg);
+ ret = crypto_register_scomp(&nx842_pseries_alg);
if (ret) {
dev_err(&viodev->dev, "could not register comp alg: %d\n", ret);
goto error;
@@ -1120,7 +1118,7 @@ static void nx842_remove(struct vio_dev *viodev)
if (caps_feat)
sysfs_remove_group(&viodev->dev.kobj, &nxcop_caps_attr_group);

- crypto_unregister_alg(&nx842_pseries_alg);
+ crypto_unregister_scomp(&nx842_pseries_alg);

spin_lock_irqsave(&devdata_mutex, flags);
old_devdata = rcu_dereference_check(devdata,
@@ -1255,7 +1253,7 @@ static void __exit nx842_pseries_exit(void)

vas_unregister_api_pseries();

- crypto_unregister_alg(&nx842_pseries_alg);
+ crypto_unregister_scomp(&nx842_pseries_alg);

spin_lock_irqsave(&devdata_mutex, flags);
old_devdata = rcu_dereference_check(devdata,
--
2.39.2


2023-07-18 13:04:41

by Ard Biesheuvel

Subject: [RFC PATCH 10/21] crypto: 842 - drop obsolete 'comp' implementation

The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/842.c | 63 +-------------------
1 file changed, 1 insertion(+), 62 deletions(-)

diff --git a/crypto/842.c b/crypto/842.c
index e59e54d769609ba6..5001d88cf727f74e 100644
--- a/crypto/842.c
+++ b/crypto/842.c
@@ -39,38 +39,11 @@ static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}

-static int crypto842_init(struct crypto_tfm *tfm)
-{
- struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
-
- ctx->wmem = crypto842_alloc_ctx(NULL);
- if (IS_ERR(ctx->wmem))
- return -ENOMEM;
-
- return 0;
-}
-
static void crypto842_free_ctx(struct crypto_scomp *tfm, void *ctx)
{
kfree(ctx);
}

-static void crypto842_exit(struct crypto_tfm *tfm)
-{
- struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
-
- crypto842_free_ctx(NULL, ctx->wmem);
-}
-
-static int crypto842_compress(struct crypto_tfm *tfm,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen)
-{
- struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
-
- return sw842_compress(src, slen, dst, dlen, ctx->wmem);
-}
-
static int crypto842_scompress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
@@ -78,13 +51,6 @@ static int crypto842_scompress(struct crypto_scomp *tfm,
return sw842_compress(src, slen, dst, dlen, ctx);
}

-static int crypto842_decompress(struct crypto_tfm *tfm,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen)
-{
- return sw842_decompress(src, slen, dst, dlen);
-}
-
static int crypto842_sdecompress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
@@ -92,20 +58,6 @@ static int crypto842_sdecompress(struct crypto_scomp *tfm,
return sw842_decompress(src, slen, dst, dlen);
}

-static struct crypto_alg alg = {
- .cra_name = "842",
- .cra_driver_name = "842-generic",
- .cra_priority = 100,
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct crypto842_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = crypto842_init,
- .cra_exit = crypto842_exit,
- .cra_u = { .compress = {
- .coa_compress = crypto842_compress,
- .coa_decompress = crypto842_decompress } }
-};
-
static struct scomp_alg scomp = {
.alloc_ctx = crypto842_alloc_ctx,
.free_ctx = crypto842_free_ctx,
@@ -121,25 +73,12 @@ static struct scomp_alg scomp = {

static int __init crypto842_mod_init(void)
{
- int ret;
-
- ret = crypto_register_alg(&alg);
- if (ret)
- return ret;
-
- ret = crypto_register_scomp(&scomp);
- if (ret) {
- crypto_unregister_alg(&alg);
- return ret;
- }
-
- return ret;
+ return crypto_register_scomp(&scomp);
}
subsys_initcall(crypto842_mod_init);

static void __exit crypto842_mod_exit(void)
{
- crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}
module_exit(crypto842_mod_exit);
--
2.39.2


2023-07-18 13:04:43

by Ard Biesheuvel

Subject: [RFC PATCH 11/21] crypto: deflate - drop obsolete 'comp' implementation

No users of the obsolete 'comp' crypto compression API remain, so let's
drop the software deflate version of it.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/deflate.c | 58 +-------------------
1 file changed, 1 insertion(+), 57 deletions(-)

diff --git a/crypto/deflate.c b/crypto/deflate.c
index f4f127078fe2a5aa..0955040ca9e64146 100644
--- a/crypto/deflate.c
+++ b/crypto/deflate.c
@@ -130,13 +130,6 @@ static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}

-static int deflate_init(struct crypto_tfm *tfm)
-{
- struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
-
- return __deflate_init(ctx);
-}
-
static void __deflate_exit(void *ctx)
{
deflate_comp_exit(ctx);
@@ -149,13 +142,6 @@ static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
kfree_sensitive(ctx);
}

-static void deflate_exit(struct crypto_tfm *tfm)
-{
- struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
-
- __deflate_exit(ctx);
-}
-
static int __deflate_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -185,14 +171,6 @@ static int __deflate_compress(const u8 *src, unsigned int slen,
return ret;
}

-static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
-
- return __deflate_compress(src, slen, dst, dlen, dctx);
-}
-
static int deflate_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -241,14 +219,6 @@ static int __deflate_decompress(const u8 *src, unsigned int slen,
return ret;
}

-static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
-
- return __deflate_decompress(src, slen, dst, dlen, dctx);
-}
-
static int deflate_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -256,19 +226,6 @@ static int deflate_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __deflate_decompress(src, slen, dst, dlen, ctx);
}

-static struct crypto_alg alg = {
- .cra_name = "deflate",
- .cra_driver_name = "deflate-generic",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct deflate_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = deflate_init,
- .cra_exit = deflate_exit,
- .cra_u = { .compress = {
- .coa_compress = deflate_compress,
- .coa_decompress = deflate_decompress } }
-};
-
static struct scomp_alg scomp = {
.alloc_ctx = deflate_alloc_ctx,
.free_ctx = deflate_free_ctx,
@@ -283,24 +240,11 @@ static struct scomp_alg scomp = {

static int __init deflate_mod_init(void)
{
- int ret;
-
- ret = crypto_register_alg(&alg);
- if (ret)
- return ret;
-
- ret = crypto_register_scomp(&scomp);
- if (ret) {
- crypto_unregister_alg(&alg);
- return ret;
- }
-
- return ret;
+ return crypto_register_scomp(&scomp);
}

static void __exit deflate_mod_fini(void)
{
- crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}

--
2.39.2


2023-07-18 13:04:53

by Ard Biesheuvel

Subject: [RFC PATCH 14/21] crypto: lzo-rle - drop obsolete 'comp' implementation

The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/lzo-rle.c | 60 +-------------------
1 file changed, 1 insertion(+), 59 deletions(-)

diff --git a/crypto/lzo-rle.c b/crypto/lzo-rle.c
index 0631d975bfac1129..658d6aa46fe21e19 100644
--- a/crypto/lzo-rle.c
+++ b/crypto/lzo-rle.c
@@ -26,29 +26,11 @@ static void *lzorle_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}

-static int lzorle_init(struct crypto_tfm *tfm)
-{
- struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
-
- ctx->lzorle_comp_mem = lzorle_alloc_ctx(NULL);
- if (IS_ERR(ctx->lzorle_comp_mem))
- return -ENOMEM;
-
- return 0;
-}
-
static void lzorle_free_ctx(struct crypto_scomp *tfm, void *ctx)
{
kvfree(ctx);
}

-static void lzorle_exit(struct crypto_tfm *tfm)
-{
- struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
-
- lzorle_free_ctx(NULL, ctx->lzorle_comp_mem);
-}
-
static int __lzorle_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -64,14 +46,6 @@ static int __lzorle_compress(const u8 *src, unsigned int slen,
return 0;
}

-static int lzorle_compress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
-
- return __lzorle_compress(src, slen, dst, dlen, ctx->lzorle_comp_mem);
-}
-
static int lzorle_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -94,12 +68,6 @@ static int __lzorle_decompress(const u8 *src, unsigned int slen,
return 0;
}

-static int lzorle_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- return __lzorle_decompress(src, slen, dst, dlen);
-}
-
static int lzorle_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -107,19 +75,6 @@ static int lzorle_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __lzorle_decompress(src, slen, dst, dlen);
}

-static struct crypto_alg alg = {
- .cra_name = "lzo-rle",
- .cra_driver_name = "lzo-rle-generic",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct lzorle_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = lzorle_init,
- .cra_exit = lzorle_exit,
- .cra_u = { .compress = {
- .coa_compress = lzorle_compress,
- .coa_decompress = lzorle_decompress } }
-};
-
static struct scomp_alg scomp = {
.alloc_ctx = lzorle_alloc_ctx,
.free_ctx = lzorle_free_ctx,
@@ -134,24 +89,11 @@ static struct scomp_alg scomp = {

static int __init lzorle_mod_init(void)
{
- int ret;
-
- ret = crypto_register_alg(&alg);
- if (ret)
- return ret;
-
- ret = crypto_register_scomp(&scomp);
- if (ret) {
- crypto_unregister_alg(&alg);
- return ret;
- }
-
- return ret;
+ return crypto_register_scomp(&scomp);
}

static void __exit lzorle_mod_fini(void)
{
- crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}

--
2.39.2


2023-07-18 13:05:12

by Ard Biesheuvel

Subject: [RFC PATCH 13/21] crypto: lz4hc - drop obsolete 'comp' implementation

The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/lz4hc.c | 63 +-------------------
1 file changed, 1 insertion(+), 62 deletions(-)

diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c
index d7cc94aa2fcf42fa..5d6b13319f5e7683 100644
--- a/crypto/lz4hc.c
+++ b/crypto/lz4hc.c
@@ -26,29 +26,11 @@ static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}

-static int lz4hc_init(struct crypto_tfm *tfm)
-{
- struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
-
- ctx->lz4hc_comp_mem = lz4hc_alloc_ctx(NULL);
- if (IS_ERR(ctx->lz4hc_comp_mem))
- return -ENOMEM;
-
- return 0;
-}
-
static void lz4hc_free_ctx(struct crypto_scomp *tfm, void *ctx)
{
vfree(ctx);
}

-static void lz4hc_exit(struct crypto_tfm *tfm)
-{
- struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
-
- lz4hc_free_ctx(NULL, ctx->lz4hc_comp_mem);
-}
-
static int __lz4hc_compress_crypto(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -69,16 +51,6 @@ static int lz4hc_scompress(struct crypto_scomp *tfm, const u8 *src,
return __lz4hc_compress_crypto(src, slen, dst, dlen, ctx);
}

-static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst,
- unsigned int *dlen)
-{
- struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
-
- return __lz4hc_compress_crypto(src, slen, dst, dlen,
- ctx->lz4hc_comp_mem);
-}
-
static int __lz4hc_decompress_crypto(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -98,26 +70,6 @@ static int lz4hc_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
}

-static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst,
- unsigned int *dlen)
-{
- return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
-}
-
-static struct crypto_alg alg_lz4hc = {
- .cra_name = "lz4hc",
- .cra_driver_name = "lz4hc-generic",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct lz4hc_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = lz4hc_init,
- .cra_exit = lz4hc_exit,
- .cra_u = { .compress = {
- .coa_compress = lz4hc_compress_crypto,
- .coa_decompress = lz4hc_decompress_crypto } }
-};
-
static struct scomp_alg scomp = {
.alloc_ctx = lz4hc_alloc_ctx,
.free_ctx = lz4hc_free_ctx,
@@ -132,24 +84,11 @@ static struct scomp_alg scomp = {

static int __init lz4hc_mod_init(void)
{
- int ret;
-
- ret = crypto_register_alg(&alg_lz4hc);
- if (ret)
- return ret;
-
- ret = crypto_register_scomp(&scomp);
- if (ret) {
- crypto_unregister_alg(&alg_lz4hc);
- return ret;
- }
-
- return ret;
+ return crypto_register_scomp(&scomp);
}

static void __exit lz4hc_mod_fini(void)
{
- crypto_unregister_alg(&alg_lz4hc);
crypto_unregister_scomp(&scomp);
}

--
2.39.2


2023-07-18 13:05:23

by Ard Biesheuvel

Subject: [RFC PATCH 16/21] crypto: zstd - drop obsolete 'comp' implementation

The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/zstd.c | 56 +-------------------
1 file changed, 1 insertion(+), 55 deletions(-)

diff --git a/crypto/zstd.c b/crypto/zstd.c
index 154a969c83a82277..c6e6f135c5812c9c 100644
--- a/crypto/zstd.c
+++ b/crypto/zstd.c
@@ -121,13 +121,6 @@ static void *zstd_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}

-static int zstd_init(struct crypto_tfm *tfm)
-{
- struct zstd_ctx *ctx = crypto_tfm_ctx(tfm);
-
- return __zstd_init(ctx);
-}
-
static void __zstd_exit(void *ctx)
{
zstd_comp_exit(ctx);
@@ -140,13 +133,6 @@ static void zstd_free_ctx(struct crypto_scomp *tfm, void *ctx)
kfree_sensitive(ctx);
}

-static void zstd_exit(struct crypto_tfm *tfm)
-{
- struct zstd_ctx *ctx = crypto_tfm_ctx(tfm);
-
- __zstd_exit(ctx);
-}
-
static int __zstd_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -161,14 +147,6 @@ static int __zstd_compress(const u8 *src, unsigned int slen,
return 0;
}

-static int zstd_compress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- struct zstd_ctx *ctx = crypto_tfm_ctx(tfm);
-
- return __zstd_compress(src, slen, dst, dlen, ctx);
-}
-
static int zstd_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -189,14 +167,6 @@ static int __zstd_decompress(const u8 *src, unsigned int slen,
return 0;
}

-static int zstd_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- struct zstd_ctx *ctx = crypto_tfm_ctx(tfm);
-
- return __zstd_decompress(src, slen, dst, dlen, ctx);
-}
-
static int zstd_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -204,19 +174,6 @@ static int zstd_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __zstd_decompress(src, slen, dst, dlen, ctx);
}

-static struct crypto_alg alg = {
- .cra_name = "zstd",
- .cra_driver_name = "zstd-generic",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct zstd_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = zstd_init,
- .cra_exit = zstd_exit,
- .cra_u = { .compress = {
- .coa_compress = zstd_compress,
- .coa_decompress = zstd_decompress } }
-};
-
static struct scomp_alg scomp = {
.alloc_ctx = zstd_alloc_ctx,
.free_ctx = zstd_free_ctx,
@@ -231,22 +188,11 @@ static struct scomp_alg scomp = {

static int __init zstd_mod_init(void)
{
- int ret;
-
- ret = crypto_register_alg(&alg);
- if (ret)
- return ret;
-
- ret = crypto_register_scomp(&scomp);
- if (ret)
- crypto_unregister_alg(&alg);
-
- return ret;
+ return crypto_register_scomp(&scomp);
}

static void __exit zstd_mod_fini(void)
{
- crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}

--
2.39.2


2023-07-18 13:05:32

by Ard Biesheuvel

Subject: [RFC PATCH 15/21] crypto: lzo - drop obsolete 'comp' implementation

The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/lzo.c | 60 +-------------------
1 file changed, 1 insertion(+), 59 deletions(-)

diff --git a/crypto/lzo.c b/crypto/lzo.c
index ebda132dd22bf543..52558f9d41f3dcea 100644
--- a/crypto/lzo.c
+++ b/crypto/lzo.c
@@ -26,29 +26,11 @@ static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
return ctx;
}

-static int lzo_init(struct crypto_tfm *tfm)
-{
- struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
-
- ctx->lzo_comp_mem = lzo_alloc_ctx(NULL);
- if (IS_ERR(ctx->lzo_comp_mem))
- return -ENOMEM;
-
- return 0;
-}
-
static void lzo_free_ctx(struct crypto_scomp *tfm, void *ctx)
{
kvfree(ctx);
}

-static void lzo_exit(struct crypto_tfm *tfm)
-{
- struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
-
- lzo_free_ctx(NULL, ctx->lzo_comp_mem);
-}
-
static int __lzo_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
@@ -64,14 +46,6 @@ static int __lzo_compress(const u8 *src, unsigned int slen,
return 0;
}

-static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
-
- return __lzo_compress(src, slen, dst, dlen, ctx->lzo_comp_mem);
-}
-
static int lzo_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -94,12 +68,6 @@ static int __lzo_decompress(const u8 *src, unsigned int slen,
return 0;
}

-static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- return __lzo_decompress(src, slen, dst, dlen);
-}
-
static int lzo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
@@ -107,19 +75,6 @@ static int lzo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
return __lzo_decompress(src, slen, dst, dlen);
}

-static struct crypto_alg alg = {
- .cra_name = "lzo",
- .cra_driver_name = "lzo-generic",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct lzo_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = lzo_init,
- .cra_exit = lzo_exit,
- .cra_u = { .compress = {
- .coa_compress = lzo_compress,
- .coa_decompress = lzo_decompress } }
-};
-
static struct scomp_alg scomp = {
.alloc_ctx = lzo_alloc_ctx,
.free_ctx = lzo_free_ctx,
@@ -134,24 +89,11 @@ static struct scomp_alg scomp = {

static int __init lzo_mod_init(void)
{
- int ret;
-
- ret = crypto_register_alg(&alg);
- if (ret)
- return ret;
-
- ret = crypto_register_scomp(&scomp);
- if (ret) {
- crypto_unregister_alg(&alg);
- return ret;
- }
-
- return ret;
+ return crypto_register_scomp(&scomp);
}

static void __exit lzo_mod_fini(void)
{
- crypto_unregister_alg(&alg);
crypto_unregister_scomp(&scomp);
}

--
2.39.2


2023-07-18 13:05:40

by Ard Biesheuvel

Subject: [RFC PATCH 17/21] crypto: cavium/zip - drop obsolete 'comp' implementation

The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
drivers/crypto/cavium/zip/zip_crypto.c | 40 ----------------
drivers/crypto/cavium/zip/zip_crypto.h | 10 ----
drivers/crypto/cavium/zip/zip_main.c | 50 +-------------------
3 files changed, 1 insertion(+), 99 deletions(-)

diff --git a/drivers/crypto/cavium/zip/zip_crypto.c b/drivers/crypto/cavium/zip/zip_crypto.c
index 1046a746d36f551c..5edad3b1d1dc8398 100644
--- a/drivers/crypto/cavium/zip/zip_crypto.c
+++ b/drivers/crypto/cavium/zip/zip_crypto.c
@@ -195,46 +195,6 @@ static int zip_decompress(const u8 *src, unsigned int slen,
return ret;
}

-/* Legacy Compress framework start */
-int zip_alloc_comp_ctx_deflate(struct crypto_tfm *tfm)
-{
- struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
-
- return zip_ctx_init(zip_ctx, 0);
-}
-
-int zip_alloc_comp_ctx_lzs(struct crypto_tfm *tfm)
-{
- struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
-
- return zip_ctx_init(zip_ctx, 1);
-}
-
-void zip_free_comp_ctx(struct crypto_tfm *tfm)
-{
- struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
-
- zip_ctx_exit(zip_ctx);
-}
-
-int zip_comp_compress(struct crypto_tfm *tfm,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen)
-{
- struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
-
- return zip_compress(src, slen, dst, dlen, zip_ctx);
-}
-
-int zip_comp_decompress(struct crypto_tfm *tfm,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen)
-{
- struct zip_kernel_ctx *zip_ctx = crypto_tfm_ctx(tfm);
-
- return zip_decompress(src, slen, dst, dlen, zip_ctx);
-} /* Legacy compress framework end */
-
/* SCOMP framework start */
void *zip_alloc_scomp_ctx_deflate(struct crypto_scomp *tfm)
{
diff --git a/drivers/crypto/cavium/zip/zip_crypto.h b/drivers/crypto/cavium/zip/zip_crypto.h
index b59ddfcacd34447e..a1ae3825fb65c3b6 100644
--- a/drivers/crypto/cavium/zip/zip_crypto.h
+++ b/drivers/crypto/cavium/zip/zip_crypto.h
@@ -57,16 +57,6 @@ struct zip_kernel_ctx {
struct zip_operation zip_decomp;
};

-int zip_alloc_comp_ctx_deflate(struct crypto_tfm *tfm);
-int zip_alloc_comp_ctx_lzs(struct crypto_tfm *tfm);
-void zip_free_comp_ctx(struct crypto_tfm *tfm);
-int zip_comp_compress(struct crypto_tfm *tfm,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen);
-int zip_comp_decompress(struct crypto_tfm *tfm,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen);
-
void *zip_alloc_scomp_ctx_deflate(struct crypto_scomp *tfm);
void *zip_alloc_scomp_ctx_lzs(struct crypto_scomp *tfm);
void zip_free_scomp_ctx(struct crypto_scomp *tfm, void *zip_ctx);
diff --git a/drivers/crypto/cavium/zip/zip_main.c b/drivers/crypto/cavium/zip/zip_main.c
index dc5b7bf7e1fd9867..abd58de4343ddd8e 100644
--- a/drivers/crypto/cavium/zip/zip_main.c
+++ b/drivers/crypto/cavium/zip/zip_main.c
@@ -371,36 +371,6 @@ static struct pci_driver zip_driver = {

/* Kernel Crypto Subsystem Interface */

-static struct crypto_alg zip_comp_deflate = {
- .cra_name = "deflate",
- .cra_driver_name = "deflate-cavium",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct zip_kernel_ctx),
- .cra_priority = 300,
- .cra_module = THIS_MODULE,
- .cra_init = zip_alloc_comp_ctx_deflate,
- .cra_exit = zip_free_comp_ctx,
- .cra_u = { .compress = {
- .coa_compress = zip_comp_compress,
- .coa_decompress = zip_comp_decompress
- } }
-};
-
-static struct crypto_alg zip_comp_lzs = {
- .cra_name = "lzs",
- .cra_driver_name = "lzs-cavium",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct zip_kernel_ctx),
- .cra_priority = 300,
- .cra_module = THIS_MODULE,
- .cra_init = zip_alloc_comp_ctx_lzs,
- .cra_exit = zip_free_comp_ctx,
- .cra_u = { .compress = {
- .coa_compress = zip_comp_compress,
- .coa_decompress = zip_comp_decompress
- } }
-};
-
static struct scomp_alg zip_scomp_deflate = {
.alloc_ctx = zip_alloc_scomp_ctx_deflate,
.free_ctx = zip_free_scomp_ctx,
@@ -431,22 +401,10 @@ static int zip_register_compression_device(void)
{
int ret;

- ret = crypto_register_alg(&zip_comp_deflate);
- if (ret < 0) {
- zip_err("Deflate algorithm registration failed\n");
- return ret;
- }
-
- ret = crypto_register_alg(&zip_comp_lzs);
- if (ret < 0) {
- zip_err("LZS algorithm registration failed\n");
- goto err_unregister_alg_deflate;
- }
-
ret = crypto_register_scomp(&zip_scomp_deflate);
if (ret < 0) {
zip_err("Deflate scomp algorithm registration failed\n");
- goto err_unregister_alg_lzs;
+ return ret;
}

ret = crypto_register_scomp(&zip_scomp_lzs);
@@ -459,18 +417,12 @@ static int zip_register_compression_device(void)

err_unregister_scomp_deflate:
crypto_unregister_scomp(&zip_scomp_deflate);
-err_unregister_alg_lzs:
- crypto_unregister_alg(&zip_comp_lzs);
-err_unregister_alg_deflate:
- crypto_unregister_alg(&zip_comp_deflate);

return ret;
}

static void zip_unregister_compression_device(void)
{
- crypto_unregister_alg(&zip_comp_deflate);
- crypto_unregister_alg(&zip_comp_lzs);
crypto_unregister_scomp(&zip_scomp_deflate);
crypto_unregister_scomp(&zip_scomp_lzs);
}
--
2.39.2


2023-07-18 13:05:46

by Ard Biesheuvel

Subject: [RFC PATCH 18/21] crypto: compress_null - drop obsolete 'comp' implementation

The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/crypto_null.c | 31 ++++----------------
crypto/testmgr.c | 3 --
2 files changed, 5 insertions(+), 29 deletions(-)

diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c
index 5b84b0f7cc178fcd..75e73b1d6df01cc6 100644
--- a/crypto/crypto_null.c
+++ b/crypto/crypto_null.c
@@ -24,16 +24,6 @@ static DEFINE_MUTEX(crypto_default_null_skcipher_lock);
static struct crypto_sync_skcipher *crypto_default_null_skcipher;
static int crypto_default_null_skcipher_refcnt;

-static int null_compress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
-{
- if (slen > *dlen)
- return -EINVAL;
- memcpy(dst, src, slen);
- *dlen = slen;
- return 0;
-}
-
static int null_init(struct shash_desc *desc)
{
return 0;
@@ -121,7 +111,7 @@ static struct skcipher_alg skcipher_null = {
.decrypt = null_skcipher_crypt,
};

-static struct crypto_alg null_algs[] = { {
+static struct crypto_alg cipher_null = {
.cra_name = "cipher_null",
.cra_driver_name = "cipher_null-generic",
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
@@ -134,19 +124,8 @@ static struct crypto_alg null_algs[] = { {
.cia_setkey = null_setkey,
.cia_encrypt = null_crypt,
.cia_decrypt = null_crypt } }
-}, {
- .cra_name = "compress_null",
- .cra_driver_name = "compress_null-generic",
- .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_blocksize = NULL_BLOCK_SIZE,
- .cra_ctxsize = 0,
- .cra_module = THIS_MODULE,
- .cra_u = { .compress = {
- .coa_compress = null_compress,
- .coa_decompress = null_compress } }
-} };
+};

-MODULE_ALIAS_CRYPTO("compress_null");
MODULE_ALIAS_CRYPTO("digest_null");
MODULE_ALIAS_CRYPTO("cipher_null");

@@ -189,7 +168,7 @@ static int __init crypto_null_mod_init(void)
{
int ret = 0;

- ret = crypto_register_algs(null_algs, ARRAY_SIZE(null_algs));
+ ret = crypto_register_alg(&cipher_null);
if (ret < 0)
goto out;

@@ -206,14 +185,14 @@ static int __init crypto_null_mod_init(void)
out_unregister_shash:
crypto_unregister_shash(&digest_null);
out_unregister_algs:
- crypto_unregister_algs(null_algs, ARRAY_SIZE(null_algs));
+ crypto_unregister_alg(&cipher_null);
out:
return ret;
}

static void __exit crypto_null_mod_fini(void)
{
- crypto_unregister_algs(null_algs, ARRAY_SIZE(null_algs));
+ crypto_unregister_alg(&cipher_null);
crypto_unregister_shash(&digest_null);
crypto_unregister_skcipher(&skcipher_null);
}
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 4971351f55dbabb9..e4b6d67233763193 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -4633,9 +4633,6 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.hash = __VECS(sm4_cmac128_tv_template)
}
- }, {
- .alg = "compress_null",
- .test = alg_test_null,
}, {
.alg = "crc32",
.test = alg_test_hash,
--
2.39.2


2023-07-18 13:06:13

by Ard Biesheuvel

Subject: [RFC PATCH 20/21] crypto: deflate - implement acomp API directly

Drop the scomp implementation of deflate, which can only operate on
contiguous input and output buffers, and replace it with a direct
implementation of the acomp API. This implementation walks the
scatterlists, removing the need for the caller to use scratch buffers
to present the input and output in a contiguous manner.

This is intended for use by the IPcomp code, which currently needs to
'linearize' SKBs in order for the compression to be able to consume the
input in a single chunk.
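
Rough sketch of what this could enable on the IPcomp side (my own
assumption, not implemented in this series; 'req', 'wait', 'dst' and
'dlen' are set up as for any acomp call, and the SKB is assumed to
have no frag list):

        struct scatterlist src[MAX_SKB_FRAGS + 1];
        int nfrags, err;

        sg_init_table(src, ARRAY_SIZE(src));
        nfrags = skb_to_sgvec(skb, src, 0, skb->len);
        if (nfrags < 0)
                return nfrags;

        acomp_request_set_params(req, src, dst, skb->len, dlen);
        err = crypto_wait_req(crypto_acomp_decompress(req), &wait);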

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/deflate.c | 315 +++++++-------------
include/crypto/scatterwalk.h | 2 +-
2 files changed, 113 insertions(+), 204 deletions(-)

diff --git a/crypto/deflate.c b/crypto/deflate.c
index 0955040ca9e64146..112683473df2b588 100644
--- a/crypto/deflate.c
+++ b/crypto/deflate.c
@@ -6,246 +6,154 @@
* by IPCOMP (RFC 3173 & RFC 2394).
*
* Copyright (c) 2003 James Morris <[email protected]>
- *
- * FIXME: deflate transforms will require up to a total of about 436k of kernel
- * memory on i386 (390k for compression, the rest for decompression), as the
- * current zlib kernel code uses a worst case pre-allocation system by default.
- * This needs to be fixed so that the amount of memory required is properly
- * related to the winbits and memlevel parameters.
- *
- * The default winbits of 11 should suit most packets, and it may be something
- * to configure on a per-tfm basis in the future.
- *
- * Currently, compression history is not maintained between tfm calls, as
- * it is not needed for IPCOMP and keeps the code simpler. It can be
- * implemented if someone wants it.
+ * Copyright (c) 2023 Google, LLC. <[email protected]>
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/zlib.h>
-#include <linux/vmalloc.h>
-#include <linux/interrupt.h>
-#include <linux/mm.h>
#include <linux/net.h>
-#include <crypto/internal/scompress.h>
+#include <linux/scatterlist.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/acompress.h>

#define DEFLATE_DEF_LEVEL Z_DEFAULT_COMPRESSION
#define DEFLATE_DEF_WINBITS 11
#define DEFLATE_DEF_MEMLEVEL MAX_MEM_LEVEL

-struct deflate_ctx {
- struct z_stream_s comp_stream;
- struct z_stream_s decomp_stream;
+struct deflate_req_ctx {
+ struct z_stream_s stream;
+ u8 workspace[];
};

-static int deflate_comp_init(struct deflate_ctx *ctx)
+static int deflate_process(struct acomp_req *req, struct z_stream_s *stream,
+ int (*process)(struct z_stream_s *, int))
{
- int ret = 0;
- struct z_stream_s *stream = &ctx->comp_stream;
+ unsigned int slen = req->slen;
+ unsigned int dlen = req->dlen;
+ struct scatter_walk src, dst;
+ unsigned int scur, dcur;
+ int ret;

- stream->workspace = vzalloc(zlib_deflate_workspacesize(
- -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL));
- if (!stream->workspace) {
- ret = -ENOMEM;
- goto out;
- }
+ stream->avail_in = stream->avail_out = 0;
+
+ scatterwalk_start(&src, req->src);
+ scatterwalk_start(&dst, req->dst);
+
+ scur = dcur = 0;
+
+ do {
+ if (stream->avail_in == 0) {
+ if (scur) {
+ slen -= scur;
+
+ scatterwalk_unmap(stream->next_in - scur);
+ scatterwalk_advance(&src, scur);
+ scatterwalk_done(&src, 0, slen);
+ }
+
+ scur = scatterwalk_clamp(&src, slen);
+ if (scur) {
+ stream->next_in = scatterwalk_map(&src);
+ stream->avail_in = scur;
+ }
+ }
+
+ if (stream->avail_out == 0) {
+ if (dcur) {
+ dlen -= dcur;
+
+ scatterwalk_unmap(stream->next_out - dcur);
+ scatterwalk_advance(&dst, dcur);
+ scatterwalk_done(&dst, 1, dlen);
+ }
+
+ dcur = scatterwalk_clamp(&dst, dlen);
+ if (!dcur)
+ break;
+
+ stream->next_out = scatterwalk_map(&dst);
+ stream->avail_out = dcur;
+ }
+
+ ret = process(stream, (slen == scur) ? Z_FINISH : Z_SYNC_FLUSH);
+ } while (ret == Z_OK);
+
+ if (scur)
+ scatterwalk_unmap(stream->next_in - scur);
+ if (dcur)
+ scatterwalk_unmap(stream->next_out - dcur);
+
+ if (ret != Z_STREAM_END)
+ return -EINVAL;
+
+ req->dlen = stream->total_out;
+ return 0;
+}
+
+static int deflate_compress(struct acomp_req *req)
+{
+ struct deflate_req_ctx *ctx = acomp_request_ctx(req);
+ struct z_stream_s *stream = &ctx->stream;
+ int ret;
+
+ if (!req->src || !req->slen || !req->dst || !req->dlen)
+ return -EINVAL;
+
+ stream->workspace = ctx->workspace;
ret = zlib_deflateInit2(stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
-DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL,
Z_DEFAULT_STRATEGY);
- if (ret != Z_OK) {
- ret = -EINVAL;
- goto out_free;
- }
-out:
+ if (ret != Z_OK)
+ return -EINVAL;
+
+ ret = deflate_process(req, stream, zlib_deflate);
+ zlib_deflateEnd(stream);
return ret;
-out_free:
- vfree(stream->workspace);
- goto out;
}

-static int deflate_decomp_init(struct deflate_ctx *ctx)
+static int deflate_decompress(struct acomp_req *req)
{
- int ret = 0;
- struct z_stream_s *stream = &ctx->decomp_stream;
+ struct deflate_req_ctx *ctx = acomp_request_ctx(req);
+ struct z_stream_s *stream = &ctx->stream;
+ int ret;

- stream->workspace = vzalloc(zlib_inflate_workspacesize());
- if (!stream->workspace) {
- ret = -ENOMEM;
- goto out;
- }
+ if (!req->src || !req->slen || !req->dst || !req->dlen)
+ return -EINVAL;
+
+ stream->workspace = ctx->workspace;
ret = zlib_inflateInit2(stream, -DEFLATE_DEF_WINBITS);
- if (ret != Z_OK) {
- ret = -EINVAL;
- goto out_free;
- }
-out:
- return ret;
-out_free:
- vfree(stream->workspace);
- goto out;
-}
+ if (ret != Z_OK)
+ return -EINVAL;

-static void deflate_comp_exit(struct deflate_ctx *ctx)
-{
- zlib_deflateEnd(&ctx->comp_stream);
- vfree(ctx->comp_stream.workspace);
-}
-
-static void deflate_decomp_exit(struct deflate_ctx *ctx)
-{
- zlib_inflateEnd(&ctx->decomp_stream);
- vfree(ctx->decomp_stream.workspace);
-}
-
-static int __deflate_init(void *ctx)
-{
- int ret;
-
- ret = deflate_comp_init(ctx);
- if (ret)
- goto out;
- ret = deflate_decomp_init(ctx);
- if (ret)
- deflate_comp_exit(ctx);
-out:
+ ret = deflate_process(req, stream, zlib_inflate);
+ req->dlen = stream->total_out;
+ zlib_inflateEnd(stream);
return ret;
}

-static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
-{
- struct deflate_ctx *ctx;
- int ret;
+static struct acomp_alg alg = {
+ .compress = deflate_compress,
+ .decompress = deflate_decompress,

- ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
- if (!ctx)
- return ERR_PTR(-ENOMEM);
-
- ret = __deflate_init(ctx);
- if (ret) {
- kfree(ctx);
- return ERR_PTR(ret);
- }
-
- return ctx;
-}
-
-static void __deflate_exit(void *ctx)
-{
- deflate_comp_exit(ctx);
- deflate_decomp_exit(ctx);
-}
-
-static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
-{
- __deflate_exit(ctx);
- kfree_sensitive(ctx);
-}
-
-static int __deflate_compress(const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen, void *ctx)
-{
- int ret = 0;
- struct deflate_ctx *dctx = ctx;
- struct z_stream_s *stream = &dctx->comp_stream;
-
- ret = zlib_deflateReset(stream);
- if (ret != Z_OK) {
- ret = -EINVAL;
- goto out;
- }
-
- stream->next_in = (u8 *)src;
- stream->avail_in = slen;
- stream->next_out = (u8 *)dst;
- stream->avail_out = *dlen;
-
- ret = zlib_deflate(stream, Z_FINISH);
- if (ret != Z_STREAM_END) {
- ret = -EINVAL;
- goto out;
- }
- ret = 0;
- *dlen = stream->total_out;
-out:
- return ret;
-}
-
-static int deflate_scompress(struct crypto_scomp *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen,
- void *ctx)
-{
- return __deflate_compress(src, slen, dst, dlen, ctx);
-}
-
-static int __deflate_decompress(const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen, void *ctx)
-{
-
- int ret = 0;
- struct deflate_ctx *dctx = ctx;
- struct z_stream_s *stream = &dctx->decomp_stream;
-
- ret = zlib_inflateReset(stream);
- if (ret != Z_OK) {
- ret = -EINVAL;
- goto out;
- }
-
- stream->next_in = (u8 *)src;
- stream->avail_in = slen;
- stream->next_out = (u8 *)dst;
- stream->avail_out = *dlen;
-
- ret = zlib_inflate(stream, Z_SYNC_FLUSH);
- /*
- * Work around a bug in zlib, which sometimes wants to taste an extra
- * byte when being used in the (undocumented) raw deflate mode.
- * (From USAGI).
- */
- if (ret == Z_OK && !stream->avail_in && stream->avail_out) {
- u8 zerostuff = 0;
- stream->next_in = &zerostuff;
- stream->avail_in = 1;
- ret = zlib_inflate(stream, Z_FINISH);
- }
- if (ret != Z_STREAM_END) {
- ret = -EINVAL;
- goto out;
- }
- ret = 0;
- *dlen = stream->total_out;
-out:
- return ret;
-}
-
-static int deflate_sdecompress(struct crypto_scomp *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen,
- void *ctx)
-{
- return __deflate_decompress(src, slen, dst, dlen, ctx);
-}
-
-static struct scomp_alg scomp = {
- .alloc_ctx = deflate_alloc_ctx,
- .free_ctx = deflate_free_ctx,
- .compress = deflate_scompress,
- .decompress = deflate_sdecompress,
- .base = {
- .cra_name = "deflate",
- .cra_driver_name = "deflate-scomp",
- .cra_module = THIS_MODULE,
- }
+ .base.cra_name = "deflate",
+ .base.cra_driver_name = "deflate-generic",
+ .base.cra_module = THIS_MODULE,
};

static int __init deflate_mod_init(void)
{
- return crypto_register_scomp(&scomp);
+ size_t size = max(zlib_inflate_workspacesize(),
+ zlib_deflate_workspacesize(-DEFLATE_DEF_WINBITS,
+ DEFLATE_DEF_MEMLEVEL));
+
+ alg.reqsize = struct_size_t(struct deflate_req_ctx, workspace, size);
+ return crypto_register_acomp(&alg);
}

static void __exit deflate_mod_fini(void)
{
- crypto_unregister_scomp(&scomp);
+ crypto_unregister_acomp(&alg);
}

subsys_initcall(deflate_mod_init);
@@ -254,4 +162,5 @@ module_exit(deflate_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Deflate Compression Algorithm for IPCOMP");
MODULE_AUTHOR("James Morris <[email protected]>");
+MODULE_AUTHOR("Ard Biesheuvel <[email protected]>");
MODULE_ALIAS_CRYPTO("deflate");
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 32fc4473175b1d81..46dc7b21bf9ecbd0 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -51,7 +51,7 @@ static inline struct page *scatterwalk_page(struct scatter_walk *walk)
return sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
}

-static inline void scatterwalk_unmap(void *vaddr)
+static inline void scatterwalk_unmap(const void *vaddr)
{
kunmap_local(vaddr);
}
--
2.39.2


2023-07-18 13:06:18

by Ard Biesheuvel

[permalink] [raw]
Subject: [RFC PATCH 19/21] crypto: remove obsolete 'comp' compression API

The 'comp' compression API has been superseded by the acomp API, which
is a bit more cumbersome to use, but ultimately more flexible when it
comes to hardware implementations.
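
For reference, a minimal synchronous acomp invocation looks roughly like
this (illustrative sketch only; src_buf/dst_buf and their lengths are
placeholders, and most error handling is omitted):

	struct crypto_acomp *tfm;
	struct acomp_req *req;
	struct scatterlist src, dst;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_acomp("deflate", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm);
	sg_init_one(&src, src_buf, src_len);
	sg_init_one(&dst, dst_buf, dst_len);
	acomp_request_set_params(req, &src, &dst, src_len, dst_len);
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);

	err = crypto_wait_req(crypto_acomp_compress(req), &wait);
	/* on success, req->dlen holds the number of bytes produced */

	acomp_request_free(req);
	crypto_free_acomp(tfm);

whereas the old 'comp' API was a single crypto_comp_compress() call on two
flat buffers.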

Now that all the users and implementations have been removed, let's
remove the core plumbing of the 'comp' API as well.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
Documentation/crypto/architecture.rst | 2 -
crypto/Makefile | 2 +-
crypto/api.c | 4 -
crypto/compress.c | 32 -----
crypto/crypto_user_base.c | 16 ---
crypto/crypto_user_stat.c | 4 -
crypto/proc.c | 3 -
crypto/testmgr.c | 144 ++------------------
include/linux/crypto.h | 49 +------
9 files changed, 12 insertions(+), 244 deletions(-)

diff --git a/Documentation/crypto/architecture.rst b/Documentation/crypto/architecture.rst
index 646c3380a7edc4c6..ec7436aade15c2e6 100644
--- a/Documentation/crypto/architecture.rst
+++ b/Documentation/crypto/architecture.rst
@@ -196,8 +196,6 @@ the aforementioned cipher types:

- CRYPTO_ALG_TYPE_CIPHER Single block cipher

-- CRYPTO_ALG_TYPE_COMPRESS Compression
-
- CRYPTO_ALG_TYPE_AEAD Authenticated Encryption with Associated Data
(MAC)

diff --git a/crypto/Makefile b/crypto/Makefile
index 953a7e105e58c837..5775440c62e09eac 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -4,7 +4,7 @@
#

obj-$(CONFIG_CRYPTO) += crypto.o
-crypto-y := api.o cipher.o compress.o
+crypto-y := api.o cipher.o

obj-$(CONFIG_CRYPTO_ENGINE) += crypto_engine.o
obj-$(CONFIG_CRYPTO_FIPS) += fips.o
diff --git a/crypto/api.c b/crypto/api.c
index b9cc0c906efe0706..23d691a70bc3fb00 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -369,10 +369,6 @@ static unsigned int crypto_ctxsize(struct crypto_alg *alg, u32 type, u32 mask)
case CRYPTO_ALG_TYPE_CIPHER:
len += crypto_cipher_ctxsize(alg);
break;
-
- case CRYPTO_ALG_TYPE_COMPRESS:
- len += crypto_compress_ctxsize(alg);
- break;
}

return len;
diff --git a/crypto/compress.c b/crypto/compress.c
deleted file mode 100644
index 9048fe390c463069..0000000000000000
--- a/crypto/compress.c
+++ /dev/null
@@ -1,32 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Cryptographic API.
- *
- * Compression operations.
- *
- * Copyright (c) 2002 James Morris <[email protected]>
- */
-#include <linux/crypto.h>
-#include "internal.h"
-
-int crypto_comp_compress(struct crypto_comp *comp,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen)
-{
- struct crypto_tfm *tfm = crypto_comp_tfm(comp);
-
- return tfm->__crt_alg->cra_compress.coa_compress(tfm, src, slen, dst,
- dlen);
-}
-EXPORT_SYMBOL_GPL(crypto_comp_compress);
-
-int crypto_comp_decompress(struct crypto_comp *comp,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen)
-{
- struct crypto_tfm *tfm = crypto_comp_tfm(comp);
-
- return tfm->__crt_alg->cra_compress.coa_decompress(tfm, src, slen, dst,
- dlen);
-}
-EXPORT_SYMBOL_GPL(crypto_comp_decompress);
diff --git a/crypto/crypto_user_base.c b/crypto/crypto_user_base.c
index 3fa20f12989f7ef2..c27484b0042e6bd8 100644
--- a/crypto/crypto_user_base.c
+++ b/crypto/crypto_user_base.c
@@ -85,17 +85,6 @@ static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
sizeof(rcipher), &rcipher);
}

-static int crypto_report_comp(struct sk_buff *skb, struct crypto_alg *alg)
-{
- struct crypto_report_comp rcomp;
-
- memset(&rcomp, 0, sizeof(rcomp));
-
- strscpy(rcomp.type, "compression", sizeof(rcomp.type));
-
- return nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS, sizeof(rcomp), &rcomp);
-}
-
static int crypto_report_one(struct crypto_alg *alg,
struct crypto_user_alg *ualg, struct sk_buff *skb)
{
@@ -136,11 +125,6 @@ static int crypto_report_one(struct crypto_alg *alg,
if (crypto_report_cipher(skb, alg))
goto nla_put_failure;

- break;
- case CRYPTO_ALG_TYPE_COMPRESS:
- if (crypto_report_comp(skb, alg))
- goto nla_put_failure;
-
break;
}

diff --git a/crypto/crypto_user_stat.c b/crypto/crypto_user_stat.c
index d4f3d39b51376973..d3133eda2f528d17 100644
--- a/crypto/crypto_user_stat.c
+++ b/crypto/crypto_user_stat.c
@@ -86,10 +86,6 @@ static int crypto_reportstat_one(struct crypto_alg *alg,
if (crypto_report_cipher(skb, alg))
goto nla_put_failure;
break;
- case CRYPTO_ALG_TYPE_COMPRESS:
- if (crypto_report_comp(skb, alg))
- goto nla_put_failure;
- break;
default:
pr_err("ERROR: Unhandled alg %d in %s\n",
alg->cra_flags & (CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_LARVAL),
diff --git a/crypto/proc.c b/crypto/proc.c
index 56c7c78df29713e3..88cc9830a9a907a8 100644
--- a/crypto/proc.c
+++ b/crypto/proc.c
@@ -75,9 +75,6 @@ static int c_show(struct seq_file *m, void *p)
seq_printf(m, "max keysize : %u\n",
alg->cra_cipher.cia_max_keysize);
break;
- case CRYPTO_ALG_TYPE_COMPRESS:
- seq_printf(m, "type : compression\n");
- break;
default:
seq_printf(m, "type : unknown\n");
break;
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index e4b6d67233763193..c476d1248c10d005 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3295,112 +3295,6 @@ static int alg_test_skcipher(const struct alg_test_desc *desc,
return err;
}

-static int test_comp(struct crypto_comp *tfm,
- const struct comp_testvec *ctemplate,
- const struct comp_testvec *dtemplate,
- int ctcount, int dtcount)
-{
- const char *algo = crypto_tfm_alg_driver_name(crypto_comp_tfm(tfm));
- char *output, *decomp_output;
- unsigned int i;
- int ret;
-
- output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
- if (!output)
- return -ENOMEM;
-
- decomp_output = kmalloc(COMP_BUF_SIZE, GFP_KERNEL);
- if (!decomp_output) {
- kfree(output);
- return -ENOMEM;
- }
-
- for (i = 0; i < ctcount; i++) {
- int ilen;
- unsigned int dlen = COMP_BUF_SIZE;
-
- memset(output, 0, COMP_BUF_SIZE);
- memset(decomp_output, 0, COMP_BUF_SIZE);
-
- ilen = ctemplate[i].inlen;
- ret = crypto_comp_compress(tfm, ctemplate[i].input,
- ilen, output, &dlen);
- if (ret) {
- printk(KERN_ERR "alg: comp: compression failed "
- "on test %d for %s: ret=%d\n", i + 1, algo,
- -ret);
- goto out;
- }
-
- ilen = dlen;
- dlen = COMP_BUF_SIZE;
- ret = crypto_comp_decompress(tfm, output,
- ilen, decomp_output, &dlen);
- if (ret) {
- pr_err("alg: comp: compression failed: decompress: on test %d for %s failed: ret=%d\n",
- i + 1, algo, -ret);
- goto out;
- }
-
- if (dlen != ctemplate[i].inlen) {
- printk(KERN_ERR "alg: comp: Compression test %d "
- "failed for %s: output len = %d\n", i + 1, algo,
- dlen);
- ret = -EINVAL;
- goto out;
- }
-
- if (memcmp(decomp_output, ctemplate[i].input,
- ctemplate[i].inlen)) {
- pr_err("alg: comp: compression failed: output differs: on test %d for %s\n",
- i + 1, algo);
- hexdump(decomp_output, dlen);
- ret = -EINVAL;
- goto out;
- }
- }
-
- for (i = 0; i < dtcount; i++) {
- int ilen;
- unsigned int dlen = COMP_BUF_SIZE;
-
- memset(decomp_output, 0, COMP_BUF_SIZE);
-
- ilen = dtemplate[i].inlen;
- ret = crypto_comp_decompress(tfm, dtemplate[i].input,
- ilen, decomp_output, &dlen);
- if (ret) {
- printk(KERN_ERR "alg: comp: decompression failed "
- "on test %d for %s: ret=%d\n", i + 1, algo,
- -ret);
- goto out;
- }
-
- if (dlen != dtemplate[i].outlen) {
- printk(KERN_ERR "alg: comp: Decompression test %d "
- "failed for %s: output len = %d\n", i + 1, algo,
- dlen);
- ret = -EINVAL;
- goto out;
- }
-
- if (memcmp(decomp_output, dtemplate[i].output, dlen)) {
- printk(KERN_ERR "alg: comp: Decompression test %d "
- "failed for %s\n", i + 1, algo);
- hexdump(decomp_output, dlen);
- ret = -EINVAL;
- goto out;
- }
- }
-
- ret = 0;
-
-out:
- kfree(decomp_output);
- kfree(output);
- return ret;
-}
-
static int test_acomp(struct crypto_acomp *tfm,
const struct comp_testvec *ctemplate,
const struct comp_testvec *dtemplate,
@@ -3657,38 +3551,20 @@ static int alg_test_cipher(const struct alg_test_desc *desc,
static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
u32 type, u32 mask)
{
- struct crypto_comp *comp;
struct crypto_acomp *acomp;
int err;
- u32 algo_type = type & CRYPTO_ALG_TYPE_ACOMPRESS_MASK;

- if (algo_type == CRYPTO_ALG_TYPE_ACOMPRESS) {
- acomp = crypto_alloc_acomp(driver, type, mask);
- if (IS_ERR(acomp)) {
- pr_err("alg: acomp: Failed to load transform for %s: %ld\n",
- driver, PTR_ERR(acomp));
- return PTR_ERR(acomp);
- }
- err = test_acomp(acomp, desc->suite.comp.comp.vecs,
- desc->suite.comp.decomp.vecs,
- desc->suite.comp.comp.count,
- desc->suite.comp.decomp.count);
- crypto_free_acomp(acomp);
- } else {
- comp = crypto_alloc_comp(driver, type, mask);
- if (IS_ERR(comp)) {
- pr_err("alg: comp: Failed to load transform for %s: %ld\n",
- driver, PTR_ERR(comp));
- return PTR_ERR(comp);
- }
-
- err = test_comp(comp, desc->suite.comp.comp.vecs,
- desc->suite.comp.decomp.vecs,
- desc->suite.comp.comp.count,
- desc->suite.comp.decomp.count);
-
- crypto_free_comp(comp);
+ acomp = crypto_alloc_acomp(driver, type, mask);
+ if (IS_ERR(acomp)) {
+ pr_err("alg: acomp: Failed to load transform for %s: %ld\n",
+ driver, PTR_ERR(acomp));
+ return PTR_ERR(acomp);
}
+ err = test_acomp(acomp, desc->suite.comp.comp.vecs,
+ desc->suite.comp.decomp.vecs,
+ desc->suite.comp.comp.count,
+ desc->suite.comp.decomp.count);
+ crypto_free_acomp(acomp);
return err;
}

diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 31f6fee0c36c6448..b40debb7d32d99fd 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -22,7 +22,7 @@
*/
#define CRYPTO_ALG_TYPE_MASK 0x0000000f
#define CRYPTO_ALG_TYPE_CIPHER 0x00000001
-#define CRYPTO_ALG_TYPE_COMPRESS 0x00000002
+//#define CRYPTO_ALG_TYPE_COMPRESS 0x00000002
#define CRYPTO_ALG_TYPE_AEAD 0x00000003
#define CRYPTO_ALG_TYPE_SKCIPHER 0x00000005
#define CRYPTO_ALG_TYPE_AKCIPHER 0x00000006
@@ -493,52 +493,5 @@ static inline unsigned int crypto_tfm_ctx_alignment(void)
return __alignof__(tfm->__crt_ctx);
}

-static inline struct crypto_comp *__crypto_comp_cast(struct crypto_tfm *tfm)
-{
- return (struct crypto_comp *)tfm;
-}
-
-static inline struct crypto_comp *crypto_alloc_comp(const char *alg_name,
- u32 type, u32 mask)
-{
- type &= ~CRYPTO_ALG_TYPE_MASK;
- type |= CRYPTO_ALG_TYPE_COMPRESS;
- mask |= CRYPTO_ALG_TYPE_MASK;
-
- return __crypto_comp_cast(crypto_alloc_base(alg_name, type, mask));
-}
-
-static inline struct crypto_tfm *crypto_comp_tfm(struct crypto_comp *tfm)
-{
- return &tfm->base;
-}
-
-static inline void crypto_free_comp(struct crypto_comp *tfm)
-{
- crypto_free_tfm(crypto_comp_tfm(tfm));
-}
-
-static inline int crypto_has_comp(const char *alg_name, u32 type, u32 mask)
-{
- type &= ~CRYPTO_ALG_TYPE_MASK;
- type |= CRYPTO_ALG_TYPE_COMPRESS;
- mask |= CRYPTO_ALG_TYPE_MASK;
-
- return crypto_has_alg(alg_name, type, mask);
-}
-
-static inline const char *crypto_comp_name(struct crypto_comp *tfm)
-{
- return crypto_tfm_alg_name(crypto_comp_tfm(tfm));
-}
-
-int crypto_comp_compress(struct crypto_comp *tfm,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen);
-
-int crypto_comp_decompress(struct crypto_comp *tfm,
- const u8 *src, unsigned int slen,
- u8 *dst, unsigned int *dlen);
-
#endif /* _LINUX_CRYPTO_H */

--
2.39.2


2023-07-18 13:06:59

by Ard Biesheuvel

[permalink] [raw]
Subject: [RFC PATCH 21/21] crypto: scompress - Drop the use of per-cpu scratch buffers

The scomp to acomp adaptation layer allocates 256k of scratch buffers
per CPU in order to be able to present the input provided by the caller
via scatterlists as linear byte arrays to the underlying synchronous
compression drivers, most of which are thin wrappers around the various
compression algorithm library implementations we have in the kernel.

This sucks. With high core counts and SMT, this easily adds up to
multiple megabytes that are permanently tied up for this purpose. Given
that all acomp users pass either single pages or contiguous buffers in
lowmem, we can optimize for this pattern and simply pass the caller's
buffer through directly where possible. This removes the need for
scratch buffers, and along with it, the arbitrary 128k upper bound on
the input and output size of the acomp API when the implementation
happens to be scomp based.

So add a scomp_map_sg() helper that tries to obtain the virtual address
of the memory described by a scatterlist; given the existing users,
which all fit the prerequisite pattern, this succeeds 100% of the time.
As a fallback for other cases, use kvmalloc with GFP_KERNEL to allocate
buffers on the fly and free them again right afterwards.

This puts the burden on future callers to either use a contiguous buffer
or deal with the potentially blocking nature of GFP_KERNEL. For IPcomp
in particular, the only relevant compression algorithm is 'deflate',
which is no longer implemented as an scomp, and so this change will not
affect it even if we decide to convert it to take advantage of the
ability to pass discontiguous scatterlists.
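
With this in place, an acomp user that wraps a contiguous lowmem buffer
in a single-entry scatterlist, e.g.

	sg_init_one(&sg, kbuf, len);

hits the fast path in scomp_map_sg(), and the underlying scomp driver
operates on that buffer directly, with no copying and no scratch buffers
involved.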

Signed-off-by: Ard Biesheuvel <[email protected]>
---
crypto/scompress.c | 159 ++++++++++----------
include/crypto/internal/scompress.h | 2 -
2 files changed, 76 insertions(+), 85 deletions(-)

diff --git a/crypto/scompress.c b/crypto/scompress.c
index 3155cdce9116e092..1c050aa864bd604d 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -18,24 +18,11 @@
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/string.h>
-#include <linux/vmalloc.h>
#include <net/netlink.h>

#include "compress.h"

-struct scomp_scratch {
- spinlock_t lock;
- void *src;
- void *dst;
-};
-
-static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
- .lock = __SPIN_LOCK_UNLOCKED(scomp_scratch.lock),
-};
-
static const struct crypto_type crypto_scomp_type;
-static int scomp_scratch_users;
-static DEFINE_MUTEX(scomp_lock);

static int __maybe_unused crypto_scomp_report(
struct sk_buff *skb, struct crypto_alg *alg)
@@ -58,56 +45,45 @@ static void crypto_scomp_show(struct seq_file *m, struct crypto_alg *alg)
seq_puts(m, "type : scomp\n");
}

-static void crypto_scomp_free_scratches(void)
-{
- struct scomp_scratch *scratch;
- int i;
-
- for_each_possible_cpu(i) {
- scratch = per_cpu_ptr(&scomp_scratch, i);
-
- vfree(scratch->src);
- vfree(scratch->dst);
- scratch->src = NULL;
- scratch->dst = NULL;
- }
-}
-
-static int crypto_scomp_alloc_scratches(void)
-{
- struct scomp_scratch *scratch;
- int i;
-
- for_each_possible_cpu(i) {
- void *mem;
-
- scratch = per_cpu_ptr(&scomp_scratch, i);
-
- mem = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
- if (!mem)
- goto error;
- scratch->src = mem;
- mem = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
- if (!mem)
- goto error;
- scratch->dst = mem;
- }
- return 0;
-error:
- crypto_scomp_free_scratches();
- return -ENOMEM;
-}
-
static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
{
- int ret = 0;
+ return 0;
+}

- mutex_lock(&scomp_lock);
- if (!scomp_scratch_users++)
- ret = crypto_scomp_alloc_scratches();
- mutex_unlock(&scomp_lock);
+/**
+ * scomp_map_sg - Return virtual address of memory described by a scatterlist
+ *
+ * @sg: The address of the scatterlist in memory
+ * @len: The length of the buffer described by the scatterlist
+ *
+ * If the memory region described by scatterlist @sg consists of @len
+ * contiguous bytes in memory and is accessible via the linear mapping or via a
+ * single kmap(), return its virtual address. Otherwise, return NULL.
+ */
+static void *scomp_map_sg(struct scatterlist *sg, unsigned int len)
+{
+ struct page *page;
+ unsigned int offset;

- return ret;
+ while (sg_is_chain(sg))
+ sg = sg_next(sg);
+
+ if (!sg || sg_nents_for_len(sg, len) != 1)
+ return NULL;
+
+ page = sg_page(sg) + (sg->offset >> PAGE_SHIFT);
+ offset = offset_in_page(sg->offset);
+
+ if (PageHighMem(page) && (offset + sg->length) > PAGE_SIZE)
+ return NULL;
+
+ return kmap_local_page(page) + offset;
+}
+
+static void scomp_unmap_sg(const void *addr)
+{
+ if (is_kmap_addr(addr))
+ kunmap_local(addr);
}

static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
@@ -116,30 +92,52 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
void **tfm_ctx = acomp_tfm_ctx(tfm);
struct crypto_scomp *scomp = *tfm_ctx;
void **ctx = acomp_request_ctx(req);
- struct scomp_scratch *scratch;
+ void *src_alloc = NULL;
+ void *dst_alloc = NULL;
+ const u8 *src;
+ u8 *dst;
int ret;

- if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
+ if (!req->src || !req->slen || !req->dst || !req->dlen)
return -EINVAL;

- if (!req->dst || !req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
- return -EINVAL;
-
- scratch = raw_cpu_ptr(&scomp_scratch);
- spin_lock(&scratch->lock);
-
- scatterwalk_map_and_copy(scratch->src, req->src, 0, req->slen, 0);
- if (dir)
- ret = crypto_scomp_compress(scomp, scratch->src, req->slen,
- scratch->dst, &req->dlen, *ctx);
- else
- ret = crypto_scomp_decompress(scomp, scratch->src, req->slen,
- scratch->dst, &req->dlen, *ctx);
- if (!ret) {
- scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen,
- 1);
+ dst = scomp_map_sg(req->dst, req->dlen);
+ if (!dst) {
+ dst = dst_alloc = kvmalloc(req->dlen, GFP_KERNEL);
+ if (!dst_alloc)
+ return -ENOMEM;
}
- spin_unlock(&scratch->lock);
+
+ src = scomp_map_sg(req->src, req->slen);
+ if (!src) {
+ src = src_alloc = kvmalloc(req->slen, GFP_KERNEL);
+ if (!src_alloc) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ scatterwalk_map_and_copy(src_alloc, req->src, 0, req->slen, 0);
+ }
+
+ if (dir)
+ ret = crypto_scomp_compress(scomp, src, req->slen, dst,
+ &req->dlen, *ctx);
+ else
+ ret = crypto_scomp_decompress(scomp, src, req->slen, dst,
+ &req->dlen, *ctx);
+
+ if (src_alloc)
+ kvfree(src_alloc);
+ else
+ scomp_unmap_sg(src);
+
+ if (!ret && dst == dst_alloc)
+ scatterwalk_map_and_copy(dst, req->dst, 0, req->dlen, 1);
+out:
+ if (dst_alloc)
+ kvfree(dst_alloc);
+ else
+ scomp_unmap_sg(dst);
+
return ret;
}

@@ -158,11 +156,6 @@ static void crypto_exit_scomp_ops_async(struct crypto_tfm *tfm)
struct crypto_scomp **ctx = crypto_tfm_ctx(tfm);

crypto_free_scomp(*ctx);
-
- mutex_lock(&scomp_lock);
- if (!--scomp_scratch_users)
- crypto_scomp_free_scratches();
- mutex_unlock(&scomp_lock);
}

int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 858fe3965ae347ef..69e593d72cbdaa99 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -12,8 +12,6 @@
#include <crypto/acompress.h>
#include <crypto/algapi.h>

-#define SCOMP_SCRATCH_SIZE 131072
-
struct acomp_req;

struct crypto_scomp {
--
2.39.2


2023-07-21 11:18:46

by Simon Horman

[permalink] [raw]
Subject: Re: [RFC PATCH 19/21] crypto: remove obsolete 'comp' compression API

On Tue, Jul 18, 2023 at 02:58:45PM +0200, Ard Biesheuvel wrote:

...

> diff --git a/crypto/crypto_user_stat.c b/crypto/crypto_user_stat.c
> index d4f3d39b51376973..d3133eda2f528d17 100644
> --- a/crypto/crypto_user_stat.c
> +++ b/crypto/crypto_user_stat.c
> @@ -86,10 +86,6 @@ static int crypto_reportstat_one(struct crypto_alg *alg,
> if (crypto_report_cipher(skb, alg))
> goto nla_put_failure;
> break;
> - case CRYPTO_ALG_TYPE_COMPRESS:
> - if (crypto_report_comp(skb, alg))
> - goto nla_put_failure;
> - break;

Hi Ard,

It seems that there is an implementation of crypto_report_comp() in this file
which is now unused and can be removed.

> default:
> pr_err("ERROR: Unhandled alg %d in %s\n",
> alg->cra_flags & (CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_LARVAL),

...

2023-07-21 11:22:52

by Simon Horman

[permalink] [raw]
Subject: Re: [RFC PATCH 20/21] crypto: deflate - implement acomp API directly

On Tue, Jul 18, 2023 at 02:58:46PM +0200, Ard Biesheuvel wrote:

...

> -static int deflate_comp_init(struct deflate_ctx *ctx)
> +static int deflate_process(struct acomp_req *req, struct z_stream_s *stream,
> + int (*process)(struct z_stream_s *, int))
> {
> - int ret = 0;
> - struct z_stream_s *stream = &ctx->comp_stream;
> + unsigned int slen = req->slen;
> + unsigned int dlen = req->dlen;
> + struct scatter_walk src, dst;
> + unsigned int scur, dcur;
> + int ret;
>
> - stream->workspace = vzalloc(zlib_deflate_workspacesize(
> - -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL));
> - if (!stream->workspace) {
> - ret = -ENOMEM;
> - goto out;
> - }
> + stream->avail_in = stream->avail_out = 0;
> +
> + scatterwalk_start(&src, req->src);
> + scatterwalk_start(&dst, req->dst);
> +
> + scur = dcur = 0;
> +
> + do {
> + if (stream->avail_in == 0) {
> + if (scur) {
> + slen -= scur;
> +
> + scatterwalk_unmap(stream->next_in - scur);
> + scatterwalk_advance(&src, scur);
> + scatterwalk_done(&src, 0, slen);
> + }
> +
> + scur = scatterwalk_clamp(&src, slen);
> + if (scur) {
> + stream->next_in = scatterwalk_map(&src);
> + stream->avail_in = scur;
> + }
> + }
> +
> + if (stream->avail_out == 0) {
> + if (dcur) {
> + dlen -= dcur;
> +
> + scatterwalk_unmap(stream->next_out - dcur);
> + scatterwalk_advance(&dst, dcur);
> + scatterwalk_done(&dst, 1, dlen);
> + }
> +
> + dcur = scatterwalk_clamp(&dst, dlen);
> + if (!dcur)
> + break;

Hi Ard,

I'm unsure if this can happen. But if this break occurs in the first
iteration of this do loop, then ret will be used uninitialised below.

Smatch noticed this.

> +
> + stream->next_out = scatterwalk_map(&dst);
> + stream->avail_out = dcur;
> + }
> +
> + ret = process(stream, (slen == scur) ? Z_FINISH : Z_SYNC_FLUSH);
> + } while (ret == Z_OK);
> +
> + if (scur)
> + scatterwalk_unmap(stream->next_in - scur);
> + if (dcur)
> + scatterwalk_unmap(stream->next_out - dcur);
> +
> + if (ret != Z_STREAM_END)
> + return -EINVAL;
> +
> + req->dlen = stream->total_out;
> + return 0;
> +}

...

2023-07-21 11:25:25

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [RFC PATCH 20/21] crypto: deflate - implement acomp API directly

On Fri, 21 Jul 2023 at 13:12, Simon Horman <[email protected]> wrote:
>
> On Tue, Jul 18, 2023 at 02:58:46PM +0200, Ard Biesheuvel wrote:
>
> ...
>
> > -static int deflate_comp_init(struct deflate_ctx *ctx)
> > +static int deflate_process(struct acomp_req *req, struct z_stream_s *stream,
> > + int (*process)(struct z_stream_s *, int))
> > {
> > - int ret = 0;
> > - struct z_stream_s *stream = &ctx->comp_stream;
> > + unsigned int slen = req->slen;
> > + unsigned int dlen = req->dlen;
> > + struct scatter_walk src, dst;
> > + unsigned int scur, dcur;
> > + int ret;
> >
> > - stream->workspace = vzalloc(zlib_deflate_workspacesize(
> > - -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL));
> > - if (!stream->workspace) {
> > - ret = -ENOMEM;
> > - goto out;
> > - }
> > + stream->avail_in = stream->avail_out = 0;
> > +
> > + scatterwalk_start(&src, req->src);
> > + scatterwalk_start(&dst, req->dst);
> > +
> > + scur = dcur = 0;
> > +
> > + do {
> > + if (stream->avail_in == 0) {
> > + if (scur) {
> > + slen -= scur;
> > +
> > + scatterwalk_unmap(stream->next_in - scur);
> > + scatterwalk_advance(&src, scur);
> > + scatterwalk_done(&src, 0, slen);
> > + }
> > +
> > + scur = scatterwalk_clamp(&src, slen);
> > + if (scur) {
> > + stream->next_in = scatterwalk_map(&src);
> > + stream->avail_in = scur;
> > + }
> > + }
> > +
> > + if (stream->avail_out == 0) {
> > + if (dcur) {
> > + dlen -= dcur;
> > +
> > + scatterwalk_unmap(stream->next_out - dcur);
> > + scatterwalk_advance(&dst, dcur);
> > + scatterwalk_done(&dst, 1, dlen);
> > + }
> > +
> > + dcur = scatterwalk_clamp(&dst, dlen);
> > + if (!dcur)
> > + break;
>
> Hi Ard,
>
> I'm unsure if this can happen. But if this break occurs in the first
> iteration of this do loop, then ret will be used uninitialised below.
>
> Smatch noticed this.
>

Thanks.

This should not happen - it would mean req->dlen == 0, which is
rejected before this function is even called.

Whether or not it might ever happen in practice is a different matter,
of course, so I should probably initialize 'ret' to something sane.
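
Something along these lines, perhaps (untested):

	unsigned int scur, dcur;
	int ret = Z_STREAM_ERROR;

so that hitting the break before the first process() call still falls
through to the existing 'ret != Z_STREAM_END' check and returns -EINVAL.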



> > +
> > + stream->next_out = scatterwalk_map(&dst);
> > + stream->avail_out = dcur;
> > + }
> > +
> > + ret = process(stream, (slen == scur) ? Z_FINISH : Z_SYNC_FLUSH);
> > + } while (ret == Z_OK);
> > +
> > + if (scur)
> > + scatterwalk_unmap(stream->next_in - scur);
> > + if (dcur)
> > + scatterwalk_unmap(stream->next_out - dcur);
> > +
> > + if (ret != Z_STREAM_END)
> > + return -EINVAL;
> > +
> > + req->dlen = stream->total_out;
> > + return 0;
> > +}
>
> ...

2023-07-28 10:09:40

by Herbert Xu

[permalink] [raw]
Subject: Re: [RFC PATCH 00/21] crypto: consolidate and clean up compression APIs

On Tue, Jul 18, 2023 at 02:58:26PM +0200, Ard Biesheuvel wrote:
>
> Patch #2 removes the support for on-the-fly allocation of destination
> buffers and scatterlists from the Intel QAT driver. This is never used,
> and not even implemented by all drivers (the HiSilicon ZIP driver does
> not support it). The diffstat of this patch makes a good case why the
> caller should be in charge of allocating the memory, not the driver.

The implementation in qat may not be optimal, but being able to
allocate memory in the algorithm is a big plus for IPComp at least.

Being able to allocate memory page by page as you decompress
means that:

1. We're not affected by memory fragmentation.
2. We don't waste memory by always allocating for the worst case.
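
Roughly, the idea is something like this (hand-wavy sketch, not the
actual qat code; it assumes the whole compressed input has already been
fed into *stream and that the sg array is big enough):

	static int decompress_to_pages(struct z_stream_s *stream,
				       struct scatterlist *sg)
	{
		int nfrags = 0;
		int ret;

		do {
			struct page *page = alloc_page(GFP_ATOMIC);

			if (!page)
				return -ENOMEM;

			sg_set_page(&sg[nfrags++], page, PAGE_SIZE, 0);
			stream->next_out = page_address(page);
			stream->avail_out = PAGE_SIZE;

			ret = zlib_inflate(stream, Z_SYNC_FLUSH);
		} while (ret == Z_OK);

		/* the length of the final entry would still need trimming */
		return ret == Z_STREAM_END ? nfrags : -EINVAL;
	}

i.e. the output SG list grows one page at a time instead of being sized
for the worst case up front.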

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-07-28 10:10:40

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [RFC PATCH 00/21] crypto: consolidate and clean up compression APIs

On Fri, 28 Jul 2023 at 11:56, Herbert Xu <[email protected]> wrote:
>
> On Tue, Jul 18, 2023 at 02:58:26PM +0200, Ard Biesheuvel wrote:
> >
> > Patch #2 removes the support for on-the-fly allocation of destination
> > buffers and scatterlists from the Intel QAT driver. This is never used,
> > and not even implemented by all drivers (the HiSilicon ZIP driver does
> > not support it). The diffstat of this patch makes a good case why the
> > caller should be in charge of allocating the memory, not the driver.
>
> The implementation in qat may not be optimal, but being able to
> allocate memory in the algorithm is a big plus for IPComp at least.
>
> Being able to allocate memory page by page as you decompress
> means that:
>
> 1. We're not affected by memory fragmentation.
> 2. We don't waste memory by always allocating for the worst case.
>

So will IPcomp be able to simply assign those pages to the SKB afterwards?

2023-07-28 10:11:15

by Herbert Xu

[permalink] [raw]
Subject: Re: [RFC PATCH 00/21] crypto: consolidate and clean up compression APIs

On Fri, Jul 28, 2023 at 11:57:42AM +0200, Ard Biesheuvel wrote:
>
> So will IPcomp be able to simply assign those pages to the SKB afterwards?

Yes that is the idea. The network stack is very much in love with
SG lists :)

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-07-28 10:17:51

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [RFC PATCH 00/21] crypto: consolidate and clean up compression APIs

On Fri, 28 Jul 2023 at 11:59, Herbert Xu <[email protected]> wrote:
>
> On Fri, Jul 28, 2023 at 11:57:42AM +0200, Ard Biesheuvel wrote:
> >
> > So will IPcomp be able to simply assign those pages to the SKB afterwards?
>
> Yes that is the idea. The network stack is very much in love with
> SG lists :)
>

Fair enough. But my point remains: this requires a lot of boilerplate
on the part of the driver, and it would be better if we could do this
in the acomp generic layer.

Does the IPcomp case always know the decompressed size upfront?

2023-07-28 10:26:37

by Herbert Xu

[permalink] [raw]
Subject: Re: [RFC PATCH 00/21] crypto: consolidate and clean up compression APIs

On Fri, Jul 28, 2023 at 12:03:23PM +0200, Ard Biesheuvel wrote:
>
> Fair enough. But my point remains: this requires a lot of boilerplate
> on the part of the driver, and it would be better if we could do this
> in the acomp generic layer.

Absolutely. If the hardware can't support allocate-as-you-go then
this should very much go into the generic layer.

> Does the IPcomp case always know the decompressed size upfront?

No, it doesn't know. Of course, we could optimise it because we know
that in 99% of cases, the packet is going to be less than 4K. But we
need a safety net for those weird jumbo packets.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt