2016-01-26 08:15:25

by Joonsoo Kim

Subject: [PATCH v2 00/10] Introduce new async/sync compression APIs

This patchset introduces new compression APIs. The work was started to
support crypto compression in zram [1], and I will restart that effort
once this patchset is merged. The major point of the new APIs is that
they are stateless: unlike the legacy compression API, the tfm object
does not embed any context, so we can de/compress concurrently with a
single tfm object. The de/compression context is instead coupled with
the request. This architectural change makes the APIs more flexible and
lets us naturally use the asynchronous API as a front-end to a
synchronous compression algorithm.

Moreover, thanks to this change, we can decompress without a context
buffer if the algorithm supports it. In that case we achieve maximum
parallelism without the memory overhead of a context buffer.
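
As a rough illustration, driving the new asynchronous API would look
roughly like the sketch below. This is a minimal sketch only: the "lzo"
backend name, the single-entry scatterlists and the error handling are
illustrative assumptions, and with only the scomp front-end in this
series the call completes synchronously (a real asynchronous driver
would additionally need a completion callback and -EINPROGRESS/-EBUSY
handling).

#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <crypto/compress.h>

static int example_acomp_compress(const void *in, unsigned int in_len,
				  void *out, unsigned int out_size,
				  unsigned int *out_len)
{
	struct crypto_acomp *tfm;
	struct acomp_req *req;
	struct scatterlist src, dst;
	int ret;

	tfm = crypto_alloc_acomp("lzo", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto free_tfm;
	}

	sg_init_one(&src, in, in_len);
	sg_init_one(&dst, out, out_size);
	acomp_request_set_comp(req, &src, &dst, in_len, out_size);
	/* No callback: the scomp front-end completes synchronously. */
	acomp_request_set_callback(req, 0, NULL, NULL);

	ret = crypto_acomp_compress(req);
	if (!ret)
		*out_len = req->out_len;

	acomp_request_free(req);
free_tfm:
	crypto_free_acomp(tfm);
	return ret;
}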

Please let me know if there is a problem.

Thanks.

[1]: https://lkml.org/lkml/2015/10/14/83

Joonsoo Kim (9):
crypto/compress: remove unused pcomp interface
crypto: add algorithm type specific flag, CRYPTO_ALG_PRIVATE
crypto/compress: introduce synchronous compression API
crypto/lzo: support new compression APIs
crypto/lz4: support new compression APIs
crypto/lz4hc: support new compression APIs
crypto/842: support new compression APIs
crypto/deflate: support new compression APIs
crypto/testmgr: add new compression APIs test

Weigang Li (1):
crypto/compress: add asynchronous compression support

crypto/842.c | 85 +++++++-
crypto/Kconfig | 23 +-
crypto/Makefile | 4 +-
crypto/acompress.c | 164 ++++++++++++++
crypto/deflate.c | 110 +++++++++-
crypto/lz4.c | 91 +++++++-
crypto/lz4hc.c | 91 +++++++-
crypto/lzo.c | 95 ++++++--
crypto/pcompress.c | 115 ----------
crypto/scompress.c | 284 ++++++++++++++++++++++++
crypto/testmgr.c | 428 ++++++++++++++++++-------------------
crypto/testmgr.h | 144 -------------
crypto/zlib.c | 381 ---------------------------------
include/crypto/compress.h | 375 ++++++++++++++++++++++++--------
include/crypto/internal/compress.h | 32 +--
include/linux/crypto.h | 9 +-
16 files changed, 1387 insertions(+), 1044 deletions(-)
create mode 100644 crypto/acompress.c
delete mode 100644 crypto/pcompress.c
create mode 100644 crypto/scompress.c
delete mode 100644 crypto/zlib.c

--
1.9.1


2016-01-26 08:15:03

by Joonsoo Kim

Subject: [PATCH v2 01/10] crypto/compress: remove unused pcomp interface

It is unused now, so remove it.

Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/Kconfig | 19 --
crypto/Makefile | 2 -
crypto/pcompress.c | 115 -----------
crypto/testmgr.c | 223 ----------------------
crypto/testmgr.h | 144 --------------
crypto/zlib.c | 381 -------------------------------------
include/crypto/compress.h | 145 --------------
include/crypto/internal/compress.h | 28 ---
include/linux/crypto.h | 1 -
9 files changed, 1058 deletions(-)
delete mode 100644 crypto/pcompress.c
delete mode 100644 crypto/zlib.c
delete mode 100644 include/crypto/compress.h
delete mode 100644 include/crypto/internal/compress.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 7240821..c80d34f 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -84,15 +84,6 @@ config CRYPTO_RNG_DEFAULT
tristate
select CRYPTO_DRBG_MENU

-config CRYPTO_PCOMP
- tristate
- select CRYPTO_PCOMP2
- select CRYPTO_ALGAPI
-
-config CRYPTO_PCOMP2
- tristate
- select CRYPTO_ALGAPI2
-
config CRYPTO_AKCIPHER2
tristate
select CRYPTO_ALGAPI2
@@ -122,7 +113,6 @@ config CRYPTO_MANAGER2
select CRYPTO_AEAD2
select CRYPTO_HASH2
select CRYPTO_BLKCIPHER2
- select CRYPTO_PCOMP2
select CRYPTO_AKCIPHER2

config CRYPTO_USER
@@ -1504,15 +1494,6 @@ config CRYPTO_DEFLATE

You will most probably want this if using IPSec.

-config CRYPTO_ZLIB
- tristate "Zlib compression algorithm"
- select CRYPTO_PCOMP
- select ZLIB_INFLATE
- select ZLIB_DEFLATE
- select NLATTR
- help
- This is the zlib algorithm.
-
config CRYPTO_LZO
tristate "LZO compression algorithm"
select CRYPTO_ALGAPI
diff --git a/crypto/Makefile b/crypto/Makefile
index 2acdbbd..ffe18c9 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -28,7 +28,6 @@ crypto_hash-y += ahash.o
crypto_hash-y += shash.o
obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o

-obj-$(CONFIG_CRYPTO_PCOMP2) += pcompress.o
obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o

$(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
@@ -99,7 +98,6 @@ obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o
-obj-$(CONFIG_CRYPTO_ZLIB) += zlib.o
obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o
obj-$(CONFIG_CRYPTO_CRC32C) += crc32c_generic.o
obj-$(CONFIG_CRYPTO_CRC32) += crc32.o
diff --git a/crypto/pcompress.c b/crypto/pcompress.c
deleted file mode 100644
index 7a13b40..0000000
--- a/crypto/pcompress.c
+++ /dev/null
@@ -1,115 +0,0 @@
-/*
- * Cryptographic API.
- *
- * Partial (de)compression operations.
- *
- * Copyright 2008 Sony Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 of the License.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.
- * If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <linux/crypto.h>
-#include <linux/errno.h>
-#include <linux/module.h>
-#include <linux/seq_file.h>
-#include <linux/string.h>
-#include <linux/cryptouser.h>
-#include <net/netlink.h>
-
-#include <crypto/compress.h>
-#include <crypto/internal/compress.h>
-
-#include "internal.h"
-
-
-static int crypto_pcomp_init(struct crypto_tfm *tfm, u32 type, u32 mask)
-{
- return 0;
-}
-
-static int crypto_pcomp_init_tfm(struct crypto_tfm *tfm)
-{
- return 0;
-}
-
-#ifdef CONFIG_NET
-static int crypto_pcomp_report(struct sk_buff *skb, struct crypto_alg *alg)
-{
- struct crypto_report_comp rpcomp;
-
- strncpy(rpcomp.type, "pcomp", sizeof(rpcomp.type));
- if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
- sizeof(struct crypto_report_comp), &rpcomp))
- goto nla_put_failure;
- return 0;
-
-nla_put_failure:
- return -EMSGSIZE;
-}
-#else
-static int crypto_pcomp_report(struct sk_buff *skb, struct crypto_alg *alg)
-{
- return -ENOSYS;
-}
-#endif
-
-static void crypto_pcomp_show(struct seq_file *m, struct crypto_alg *alg)
- __attribute__ ((unused));
-static void crypto_pcomp_show(struct seq_file *m, struct crypto_alg *alg)
-{
- seq_printf(m, "type : pcomp\n");
-}
-
-static const struct crypto_type crypto_pcomp_type = {
- .extsize = crypto_alg_extsize,
- .init = crypto_pcomp_init,
- .init_tfm = crypto_pcomp_init_tfm,
-#ifdef CONFIG_PROC_FS
- .show = crypto_pcomp_show,
-#endif
- .report = crypto_pcomp_report,
- .maskclear = ~CRYPTO_ALG_TYPE_MASK,
- .maskset = CRYPTO_ALG_TYPE_MASK,
- .type = CRYPTO_ALG_TYPE_PCOMPRESS,
- .tfmsize = offsetof(struct crypto_pcomp, base),
-};
-
-struct crypto_pcomp *crypto_alloc_pcomp(const char *alg_name, u32 type,
- u32 mask)
-{
- return crypto_alloc_tfm(alg_name, &crypto_pcomp_type, type, mask);
-}
-EXPORT_SYMBOL_GPL(crypto_alloc_pcomp);
-
-int crypto_register_pcomp(struct pcomp_alg *alg)
-{
- struct crypto_alg *base = &alg->base;
-
- base->cra_type = &crypto_pcomp_type;
- base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
- base->cra_flags |= CRYPTO_ALG_TYPE_PCOMPRESS;
-
- return crypto_register_alg(base);
-}
-EXPORT_SYMBOL_GPL(crypto_register_pcomp);
-
-int crypto_unregister_pcomp(struct pcomp_alg *alg)
-{
- return crypto_unregister_alg(&alg->base);
-}
-EXPORT_SYMBOL_GPL(crypto_unregister_pcomp);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("Partial (de)compression type");
-MODULE_AUTHOR("Sony Corporation");
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index ae8c57fd..086aa4d 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -96,13 +96,6 @@ struct comp_test_suite {
} comp, decomp;
};

-struct pcomp_test_suite {
- struct {
- struct pcomp_testvec *vecs;
- unsigned int count;
- } comp, decomp;
-};
-
struct hash_test_suite {
struct hash_testvec *vecs;
unsigned int count;
@@ -133,7 +126,6 @@ struct alg_test_desc {
struct aead_test_suite aead;
struct cipher_test_suite cipher;
struct comp_test_suite comp;
- struct pcomp_test_suite pcomp;
struct hash_test_suite hash;
struct cprng_test_suite cprng;
struct drbg_test_suite drbg;
@@ -1293,183 +1285,6 @@ out:
return ret;
}

-static int test_pcomp(struct crypto_pcomp *tfm,
- struct pcomp_testvec *ctemplate,
- struct pcomp_testvec *dtemplate, int ctcount,
- int dtcount)
-{
- const char *algo = crypto_tfm_alg_driver_name(crypto_pcomp_tfm(tfm));
- unsigned int i;
- char result[COMP_BUF_SIZE];
- int res;
-
- for (i = 0; i < ctcount; i++) {
- struct comp_request req;
- unsigned int produced = 0;
-
- res = crypto_compress_setup(tfm, ctemplate[i].params,
- ctemplate[i].paramsize);
- if (res) {
- pr_err("alg: pcomp: compression setup failed on test "
- "%d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
-
- res = crypto_compress_init(tfm);
- if (res) {
- pr_err("alg: pcomp: compression init failed on test "
- "%d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
-
- memset(result, 0, sizeof(result));
-
- req.next_in = ctemplate[i].input;
- req.avail_in = ctemplate[i].inlen / 2;
- req.next_out = result;
- req.avail_out = ctemplate[i].outlen / 2;
-
- res = crypto_compress_update(tfm, &req);
- if (res < 0 && (res != -EAGAIN || req.avail_in)) {
- pr_err("alg: pcomp: compression update failed on test "
- "%d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
- if (res > 0)
- produced += res;
-
- /* Add remaining input data */
- req.avail_in += (ctemplate[i].inlen + 1) / 2;
-
- res = crypto_compress_update(tfm, &req);
- if (res < 0 && (res != -EAGAIN || req.avail_in)) {
- pr_err("alg: pcomp: compression update failed on test "
- "%d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
- if (res > 0)
- produced += res;
-
- /* Provide remaining output space */
- req.avail_out += COMP_BUF_SIZE - ctemplate[i].outlen / 2;
-
- res = crypto_compress_final(tfm, &req);
- if (res < 0) {
- pr_err("alg: pcomp: compression final failed on test "
- "%d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
- produced += res;
-
- if (COMP_BUF_SIZE - req.avail_out != ctemplate[i].outlen) {
- pr_err("alg: comp: Compression test %d failed for %s: "
- "output len = %d (expected %d)\n", i + 1, algo,
- COMP_BUF_SIZE - req.avail_out,
- ctemplate[i].outlen);
- return -EINVAL;
- }
-
- if (produced != ctemplate[i].outlen) {
- pr_err("alg: comp: Compression test %d failed for %s: "
- "returned len = %u (expected %d)\n", i + 1,
- algo, produced, ctemplate[i].outlen);
- return -EINVAL;
- }
-
- if (memcmp(result, ctemplate[i].output, ctemplate[i].outlen)) {
- pr_err("alg: pcomp: Compression test %d failed for "
- "%s\n", i + 1, algo);
- hexdump(result, ctemplate[i].outlen);
- return -EINVAL;
- }
- }
-
- for (i = 0; i < dtcount; i++) {
- struct comp_request req;
- unsigned int produced = 0;
-
- res = crypto_decompress_setup(tfm, dtemplate[i].params,
- dtemplate[i].paramsize);
- if (res) {
- pr_err("alg: pcomp: decompression setup failed on "
- "test %d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
-
- res = crypto_decompress_init(tfm);
- if (res) {
- pr_err("alg: pcomp: decompression init failed on test "
- "%d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
-
- memset(result, 0, sizeof(result));
-
- req.next_in = dtemplate[i].input;
- req.avail_in = dtemplate[i].inlen / 2;
- req.next_out = result;
- req.avail_out = dtemplate[i].outlen / 2;
-
- res = crypto_decompress_update(tfm, &req);
- if (res < 0 && (res != -EAGAIN || req.avail_in)) {
- pr_err("alg: pcomp: decompression update failed on "
- "test %d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
- if (res > 0)
- produced += res;
-
- /* Add remaining input data */
- req.avail_in += (dtemplate[i].inlen + 1) / 2;
-
- res = crypto_decompress_update(tfm, &req);
- if (res < 0 && (res != -EAGAIN || req.avail_in)) {
- pr_err("alg: pcomp: decompression update failed on "
- "test %d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
- if (res > 0)
- produced += res;
-
- /* Provide remaining output space */
- req.avail_out += COMP_BUF_SIZE - dtemplate[i].outlen / 2;
-
- res = crypto_decompress_final(tfm, &req);
- if (res < 0 && (res != -EAGAIN || req.avail_in)) {
- pr_err("alg: pcomp: decompression final failed on "
- "test %d for %s: error=%d\n", i + 1, algo, res);
- return res;
- }
- if (res > 0)
- produced += res;
-
- if (COMP_BUF_SIZE - req.avail_out != dtemplate[i].outlen) {
- pr_err("alg: comp: Decompression test %d failed for "
- "%s: output len = %d (expected %d)\n", i + 1,
- algo, COMP_BUF_SIZE - req.avail_out,
- dtemplate[i].outlen);
- return -EINVAL;
- }
-
- if (produced != dtemplate[i].outlen) {
- pr_err("alg: comp: Decompression test %d failed for "
- "%s: returned len = %u (expected %d)\n", i + 1,
- algo, produced, dtemplate[i].outlen);
- return -EINVAL;
- }
-
- if (memcmp(result, dtemplate[i].output, dtemplate[i].outlen)) {
- pr_err("alg: pcomp: Decompression test %d failed for "
- "%s\n", i + 1, algo);
- hexdump(result, dtemplate[i].outlen);
- return -EINVAL;
- }
- }
-
- return 0;
-}
-
-
static int test_cprng(struct crypto_rng *tfm, struct cprng_testvec *template,
unsigned int tcount)
{
@@ -1640,28 +1455,6 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
return err;
}

-static int alg_test_pcomp(const struct alg_test_desc *desc, const char *driver,
- u32 type, u32 mask)
-{
- struct crypto_pcomp *tfm;
- int err;
-
- tfm = crypto_alloc_pcomp(driver, type, mask);
- if (IS_ERR(tfm)) {
- pr_err("alg: pcomp: Failed to load transform for %s: %ld\n",
- driver, PTR_ERR(tfm));
- return PTR_ERR(tfm);
- }
-
- err = test_pcomp(tfm, desc->suite.pcomp.comp.vecs,
- desc->suite.pcomp.decomp.vecs,
- desc->suite.pcomp.comp.count,
- desc->suite.pcomp.decomp.count);
-
- crypto_free_pcomp(tfm);
- return err;
-}
-
static int alg_test_hash(const struct alg_test_desc *desc, const char *driver,
u32 type, u32 mask)
{
@@ -3840,22 +3633,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}
- }, {
- .alg = "zlib",
- .test = alg_test_pcomp,
- .fips_allowed = 1,
- .suite = {
- .pcomp = {
- .comp = {
- .vecs = zlib_comp_tv_template,
- .count = ZLIB_COMP_TEST_VECTORS
- },
- .decomp = {
- .vecs = zlib_decomp_tv_template,
- .count = ZLIB_DECOMP_TEST_VECTORS
- }
- }
- }
}
};

diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index da0a8fd..487ec88 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -25,9 +25,6 @@
#define _CRYPTO_TESTMGR_H

#include <linux/netlink.h>
-#include <linux/zlib.h>
-
-#include <crypto/compress.h>

#define MAX_DIGEST_SIZE 64
#define MAX_TAP 8
@@ -32268,14 +32265,6 @@ struct comp_testvec {
char output[COMP_BUF_SIZE];
};

-struct pcomp_testvec {
- const void *params;
- unsigned int paramsize;
- int inlen, outlen;
- char input[COMP_BUF_SIZE];
- char output[COMP_BUF_SIZE];
-};
-
/*
* Deflate test vectors (null-terminated strings).
* Params: winbits=-11, Z_DEFAULT_COMPRESSION, MAX_MEM_LEVEL.
@@ -32356,139 +32345,6 @@ static struct comp_testvec deflate_decomp_tv_template[] = {
},
};

-#define ZLIB_COMP_TEST_VECTORS 2
-#define ZLIB_DECOMP_TEST_VECTORS 2
-
-static const struct {
- struct nlattr nla;
- int val;
-} deflate_comp_params[] = {
- {
- .nla = {
- .nla_len = NLA_HDRLEN + sizeof(int),
- .nla_type = ZLIB_COMP_LEVEL,
- },
- .val = Z_DEFAULT_COMPRESSION,
- }, {
- .nla = {
- .nla_len = NLA_HDRLEN + sizeof(int),
- .nla_type = ZLIB_COMP_METHOD,
- },
- .val = Z_DEFLATED,
- }, {
- .nla = {
- .nla_len = NLA_HDRLEN + sizeof(int),
- .nla_type = ZLIB_COMP_WINDOWBITS,
- },
- .val = -11,
- }, {
- .nla = {
- .nla_len = NLA_HDRLEN + sizeof(int),
- .nla_type = ZLIB_COMP_MEMLEVEL,
- },
- .val = MAX_MEM_LEVEL,
- }, {
- .nla = {
- .nla_len = NLA_HDRLEN + sizeof(int),
- .nla_type = ZLIB_COMP_STRATEGY,
- },
- .val = Z_DEFAULT_STRATEGY,
- }
-};
-
-static const struct {
- struct nlattr nla;
- int val;
-} deflate_decomp_params[] = {
- {
- .nla = {
- .nla_len = NLA_HDRLEN + sizeof(int),
- .nla_type = ZLIB_DECOMP_WINDOWBITS,
- },
- .val = -11,
- }
-};
-
-static struct pcomp_testvec zlib_comp_tv_template[] = {
- {
- .params = &deflate_comp_params,
- .paramsize = sizeof(deflate_comp_params),
- .inlen = 70,
- .outlen = 38,
- .input = "Join us now and share the software "
- "Join us now and share the software ",
- .output = "\xf3\xca\xcf\xcc\x53\x28\x2d\x56"
- "\xc8\xcb\x2f\x57\x48\xcc\x4b\x51"
- "\x28\xce\x48\x2c\x4a\x55\x28\xc9"
- "\x48\x55\x28\xce\x4f\x2b\x29\x07"
- "\x71\xbc\x08\x2b\x01\x00",
- }, {
- .params = &deflate_comp_params,
- .paramsize = sizeof(deflate_comp_params),
- .inlen = 191,
- .outlen = 122,
- .input = "This document describes a compression method based on the DEFLATE"
- "compression algorithm. This document defines the application of "
- "the DEFLATE algorithm to the IP Payload Compression Protocol.",
- .output = "\x5d\x8d\x31\x0e\xc2\x30\x10\x04"
- "\xbf\xb2\x2f\xc8\x1f\x10\x04\x09"
- "\x89\xc2\x85\x3f\x70\xb1\x2f\xf8"
- "\x24\xdb\x67\xd9\x47\xc1\xef\x49"
- "\x68\x12\x51\xae\x76\x67\xd6\x27"
- "\x19\x88\x1a\xde\x85\xab\x21\xf2"
- "\x08\x5d\x16\x1e\x20\x04\x2d\xad"
- "\xf3\x18\xa2\x15\x85\x2d\x69\xc4"
- "\x42\x83\x23\xb6\x6c\x89\x71\x9b"
- "\xef\xcf\x8b\x9f\xcf\x33\xca\x2f"
- "\xed\x62\xa9\x4c\x80\xff\x13\xaf"
- "\x52\x37\xed\x0e\x52\x6b\x59\x02"
- "\xd9\x4e\xe8\x7a\x76\x1d\x02\x98"
- "\xfe\x8a\x87\x83\xa3\x4f\x56\x8a"
- "\xb8\x9e\x8e\x5c\x57\xd3\xa0\x79"
- "\xfa\x02",
- },
-};
-
-static struct pcomp_testvec zlib_decomp_tv_template[] = {
- {
- .params = &deflate_decomp_params,
- .paramsize = sizeof(deflate_decomp_params),
- .inlen = 122,
- .outlen = 191,
- .input = "\x5d\x8d\x31\x0e\xc2\x30\x10\x04"
- "\xbf\xb2\x2f\xc8\x1f\x10\x04\x09"
- "\x89\xc2\x85\x3f\x70\xb1\x2f\xf8"
- "\x24\xdb\x67\xd9\x47\xc1\xef\x49"
- "\x68\x12\x51\xae\x76\x67\xd6\x27"
- "\x19\x88\x1a\xde\x85\xab\x21\xf2"
- "\x08\x5d\x16\x1e\x20\x04\x2d\xad"
- "\xf3\x18\xa2\x15\x85\x2d\x69\xc4"
- "\x42\x83\x23\xb6\x6c\x89\x71\x9b"
- "\xef\xcf\x8b\x9f\xcf\x33\xca\x2f"
- "\xed\x62\xa9\x4c\x80\xff\x13\xaf"
- "\x52\x37\xed\x0e\x52\x6b\x59\x02"
- "\xd9\x4e\xe8\x7a\x76\x1d\x02\x98"
- "\xfe\x8a\x87\x83\xa3\x4f\x56\x8a"
- "\xb8\x9e\x8e\x5c\x57\xd3\xa0\x79"
- "\xfa\x02",
- .output = "This document describes a compression method based on the DEFLATE"
- "compression algorithm. This document defines the application of "
- "the DEFLATE algorithm to the IP Payload Compression Protocol.",
- }, {
- .params = &deflate_decomp_params,
- .paramsize = sizeof(deflate_decomp_params),
- .inlen = 38,
- .outlen = 70,
- .input = "\xf3\xca\xcf\xcc\x53\x28\x2d\x56"
- "\xc8\xcb\x2f\x57\x48\xcc\x4b\x51"
- "\x28\xce\x48\x2c\x4a\x55\x28\xc9"
- "\x48\x55\x28\xce\x4f\x2b\x29\x07"
- "\x71\xbc\x08\x2b\x01\x00",
- .output = "Join us now and share the software "
- "Join us now and share the software ",
- },
-};
-
/*
* LZO test vectors (null-terminated strings).
*/
diff --git a/crypto/zlib.c b/crypto/zlib.c
deleted file mode 100644
index d51a30a..0000000
--- a/crypto/zlib.c
+++ /dev/null
@@ -1,381 +0,0 @@
-/*
- * Cryptographic API.
- *
- * Zlib algorithm
- *
- * Copyright 2008 Sony Corporation
- *
- * Based on deflate.c, which is
- * Copyright (c) 2003 James Morris <[email protected]>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
- * any later version.
- *
- * FIXME: deflate transforms will require up to a total of about 436k of kernel
- * memory on i386 (390k for compression, the rest for decompression), as the
- * current zlib kernel code uses a worst case pre-allocation system by default.
- * This needs to be fixed so that the amount of memory required is properly
- * related to the winbits and memlevel parameters.
- */
-
-#define pr_fmt(fmt) "%s: " fmt, __func__
-
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/zlib.h>
-#include <linux/vmalloc.h>
-#include <linux/interrupt.h>
-#include <linux/mm.h>
-#include <linux/net.h>
-
-#include <crypto/internal/compress.h>
-
-#include <net/netlink.h>
-
-
-struct zlib_ctx {
- struct z_stream_s comp_stream;
- struct z_stream_s decomp_stream;
- int decomp_windowBits;
-};
-
-
-static void zlib_comp_exit(struct zlib_ctx *ctx)
-{
- struct z_stream_s *stream = &ctx->comp_stream;
-
- if (stream->workspace) {
- zlib_deflateEnd(stream);
- vfree(stream->workspace);
- stream->workspace = NULL;
- }
-}
-
-static void zlib_decomp_exit(struct zlib_ctx *ctx)
-{
- struct z_stream_s *stream = &ctx->decomp_stream;
-
- if (stream->workspace) {
- zlib_inflateEnd(stream);
- vfree(stream->workspace);
- stream->workspace = NULL;
- }
-}
-
-static int zlib_init(struct crypto_tfm *tfm)
-{
- return 0;
-}
-
-static void zlib_exit(struct crypto_tfm *tfm)
-{
- struct zlib_ctx *ctx = crypto_tfm_ctx(tfm);
-
- zlib_comp_exit(ctx);
- zlib_decomp_exit(ctx);
-}
-
-
-static int zlib_compress_setup(struct crypto_pcomp *tfm, const void *params,
- unsigned int len)
-{
- struct zlib_ctx *ctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
- struct z_stream_s *stream = &ctx->comp_stream;
- struct nlattr *tb[ZLIB_COMP_MAX + 1];
- int window_bits, mem_level;
- size_t workspacesize;
- int ret;
-
- ret = nla_parse(tb, ZLIB_COMP_MAX, params, len, NULL);
- if (ret)
- return ret;
-
- zlib_comp_exit(ctx);
-
- window_bits = tb[ZLIB_COMP_WINDOWBITS]
- ? nla_get_u32(tb[ZLIB_COMP_WINDOWBITS])
- : MAX_WBITS;
- mem_level = tb[ZLIB_COMP_MEMLEVEL]
- ? nla_get_u32(tb[ZLIB_COMP_MEMLEVEL])
- : DEF_MEM_LEVEL;
-
- workspacesize = zlib_deflate_workspacesize(window_bits, mem_level);
- stream->workspace = vzalloc(workspacesize);
- if (!stream->workspace)
- return -ENOMEM;
-
- ret = zlib_deflateInit2(stream,
- tb[ZLIB_COMP_LEVEL]
- ? nla_get_u32(tb[ZLIB_COMP_LEVEL])
- : Z_DEFAULT_COMPRESSION,
- tb[ZLIB_COMP_METHOD]
- ? nla_get_u32(tb[ZLIB_COMP_METHOD])
- : Z_DEFLATED,
- window_bits,
- mem_level,
- tb[ZLIB_COMP_STRATEGY]
- ? nla_get_u32(tb[ZLIB_COMP_STRATEGY])
- : Z_DEFAULT_STRATEGY);
- if (ret != Z_OK) {
- vfree(stream->workspace);
- stream->workspace = NULL;
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int zlib_compress_init(struct crypto_pcomp *tfm)
-{
- int ret;
- struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
- struct z_stream_s *stream = &dctx->comp_stream;
-
- ret = zlib_deflateReset(stream);
- if (ret != Z_OK)
- return -EINVAL;
-
- return 0;
-}
-
-static int zlib_compress_update(struct crypto_pcomp *tfm,
- struct comp_request *req)
-{
- int ret;
- struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
- struct z_stream_s *stream = &dctx->comp_stream;
-
- pr_debug("avail_in %u, avail_out %u\n", req->avail_in, req->avail_out);
- stream->next_in = req->next_in;
- stream->avail_in = req->avail_in;
- stream->next_out = req->next_out;
- stream->avail_out = req->avail_out;
-
- ret = zlib_deflate(stream, Z_NO_FLUSH);
- switch (ret) {
- case Z_OK:
- break;
-
- case Z_BUF_ERROR:
- pr_debug("zlib_deflate could not make progress\n");
- return -EAGAIN;
-
- default:
- pr_debug("zlib_deflate failed %d\n", ret);
- return -EINVAL;
- }
-
- ret = req->avail_out - stream->avail_out;
- pr_debug("avail_in %lu, avail_out %lu (consumed %lu, produced %u)\n",
- stream->avail_in, stream->avail_out,
- req->avail_in - stream->avail_in, ret);
- req->next_in = stream->next_in;
- req->avail_in = stream->avail_in;
- req->next_out = stream->next_out;
- req->avail_out = stream->avail_out;
- return ret;
-}
-
-static int zlib_compress_final(struct crypto_pcomp *tfm,
- struct comp_request *req)
-{
- int ret;
- struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
- struct z_stream_s *stream = &dctx->comp_stream;
-
- pr_debug("avail_in %u, avail_out %u\n", req->avail_in, req->avail_out);
- stream->next_in = req->next_in;
- stream->avail_in = req->avail_in;
- stream->next_out = req->next_out;
- stream->avail_out = req->avail_out;
-
- ret = zlib_deflate(stream, Z_FINISH);
- if (ret != Z_STREAM_END) {
- pr_debug("zlib_deflate failed %d\n", ret);
- return -EINVAL;
- }
-
- ret = req->avail_out - stream->avail_out;
- pr_debug("avail_in %lu, avail_out %lu (consumed %lu, produced %u)\n",
- stream->avail_in, stream->avail_out,
- req->avail_in - stream->avail_in, ret);
- req->next_in = stream->next_in;
- req->avail_in = stream->avail_in;
- req->next_out = stream->next_out;
- req->avail_out = stream->avail_out;
- return ret;
-}
-
-
-static int zlib_decompress_setup(struct crypto_pcomp *tfm, const void *params,
- unsigned int len)
-{
- struct zlib_ctx *ctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
- struct z_stream_s *stream = &ctx->decomp_stream;
- struct nlattr *tb[ZLIB_DECOMP_MAX + 1];
- int ret = 0;
-
- ret = nla_parse(tb, ZLIB_DECOMP_MAX, params, len, NULL);
- if (ret)
- return ret;
-
- zlib_decomp_exit(ctx);
-
- ctx->decomp_windowBits = tb[ZLIB_DECOMP_WINDOWBITS]
- ? nla_get_u32(tb[ZLIB_DECOMP_WINDOWBITS])
- : DEF_WBITS;
-
- stream->workspace = vzalloc(zlib_inflate_workspacesize());
- if (!stream->workspace)
- return -ENOMEM;
-
- ret = zlib_inflateInit2(stream, ctx->decomp_windowBits);
- if (ret != Z_OK) {
- vfree(stream->workspace);
- stream->workspace = NULL;
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int zlib_decompress_init(struct crypto_pcomp *tfm)
-{
- int ret;
- struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
- struct z_stream_s *stream = &dctx->decomp_stream;
-
- ret = zlib_inflateReset(stream);
- if (ret != Z_OK)
- return -EINVAL;
-
- return 0;
-}
-
-static int zlib_decompress_update(struct crypto_pcomp *tfm,
- struct comp_request *req)
-{
- int ret;
- struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
- struct z_stream_s *stream = &dctx->decomp_stream;
-
- pr_debug("avail_in %u, avail_out %u\n", req->avail_in, req->avail_out);
- stream->next_in = req->next_in;
- stream->avail_in = req->avail_in;
- stream->next_out = req->next_out;
- stream->avail_out = req->avail_out;
-
- ret = zlib_inflate(stream, Z_SYNC_FLUSH);
- switch (ret) {
- case Z_OK:
- case Z_STREAM_END:
- break;
-
- case Z_BUF_ERROR:
- pr_debug("zlib_inflate could not make progress\n");
- return -EAGAIN;
-
- default:
- pr_debug("zlib_inflate failed %d\n", ret);
- return -EINVAL;
- }
-
- ret = req->avail_out - stream->avail_out;
- pr_debug("avail_in %lu, avail_out %lu (consumed %lu, produced %u)\n",
- stream->avail_in, stream->avail_out,
- req->avail_in - stream->avail_in, ret);
- req->next_in = stream->next_in;
- req->avail_in = stream->avail_in;
- req->next_out = stream->next_out;
- req->avail_out = stream->avail_out;
- return ret;
-}
-
-static int zlib_decompress_final(struct crypto_pcomp *tfm,
- struct comp_request *req)
-{
- int ret;
- struct zlib_ctx *dctx = crypto_tfm_ctx(crypto_pcomp_tfm(tfm));
- struct z_stream_s *stream = &dctx->decomp_stream;
-
- pr_debug("avail_in %u, avail_out %u\n", req->avail_in, req->avail_out);
- stream->next_in = req->next_in;
- stream->avail_in = req->avail_in;
- stream->next_out = req->next_out;
- stream->avail_out = req->avail_out;
-
- if (dctx->decomp_windowBits < 0) {
- ret = zlib_inflate(stream, Z_SYNC_FLUSH);
- /*
- * Work around a bug in zlib, which sometimes wants to taste an
- * extra byte when being used in the (undocumented) raw deflate
- * mode. (From USAGI).
- */
- if (ret == Z_OK && !stream->avail_in && stream->avail_out) {
- const void *saved_next_in = stream->next_in;
- u8 zerostuff = 0;
-
- stream->next_in = &zerostuff;
- stream->avail_in = 1;
- ret = zlib_inflate(stream, Z_FINISH);
- stream->next_in = saved_next_in;
- stream->avail_in = 0;
- }
- } else
- ret = zlib_inflate(stream, Z_FINISH);
- if (ret != Z_STREAM_END) {
- pr_debug("zlib_inflate failed %d\n", ret);
- return -EINVAL;
- }
-
- ret = req->avail_out - stream->avail_out;
- pr_debug("avail_in %lu, avail_out %lu (consumed %lu, produced %u)\n",
- stream->avail_in, stream->avail_out,
- req->avail_in - stream->avail_in, ret);
- req->next_in = stream->next_in;
- req->avail_in = stream->avail_in;
- req->next_out = stream->next_out;
- req->avail_out = stream->avail_out;
- return ret;
-}
-
-
-static struct pcomp_alg zlib_alg = {
- .compress_setup = zlib_compress_setup,
- .compress_init = zlib_compress_init,
- .compress_update = zlib_compress_update,
- .compress_final = zlib_compress_final,
- .decompress_setup = zlib_decompress_setup,
- .decompress_init = zlib_decompress_init,
- .decompress_update = zlib_decompress_update,
- .decompress_final = zlib_decompress_final,
-
- .base = {
- .cra_name = "zlib",
- .cra_flags = CRYPTO_ALG_TYPE_PCOMPRESS,
- .cra_ctxsize = sizeof(struct zlib_ctx),
- .cra_module = THIS_MODULE,
- .cra_init = zlib_init,
- .cra_exit = zlib_exit,
- }
-};
-
-static int __init zlib_mod_init(void)
-{
- return crypto_register_pcomp(&zlib_alg);
-}
-
-static void __exit zlib_mod_fini(void)
-{
- crypto_unregister_pcomp(&zlib_alg);
-}
-
-module_init(zlib_mod_init);
-module_exit(zlib_mod_fini);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("Zlib Compression Algorithm");
-MODULE_AUTHOR("Sony Corporation");
-MODULE_ALIAS_CRYPTO("zlib");
diff --git a/include/crypto/compress.h b/include/crypto/compress.h
deleted file mode 100644
index 5b67af8..0000000
--- a/include/crypto/compress.h
+++ /dev/null
@@ -1,145 +0,0 @@
-/*
- * Compress: Compression algorithms under the cryptographic API.
- *
- * Copyright 2008 Sony Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 of the License.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.
- * If not, see <http://www.gnu.org/licenses/>.
- */
-
-#ifndef _CRYPTO_COMPRESS_H
-#define _CRYPTO_COMPRESS_H
-
-#include <linux/crypto.h>
-
-
-struct comp_request {
- const void *next_in; /* next input byte */
- void *next_out; /* next output byte */
- unsigned int avail_in; /* bytes available at next_in */
- unsigned int avail_out; /* bytes available at next_out */
-};
-
-enum zlib_comp_params {
- ZLIB_COMP_LEVEL = 1, /* e.g. Z_DEFAULT_COMPRESSION */
- ZLIB_COMP_METHOD, /* e.g. Z_DEFLATED */
- ZLIB_COMP_WINDOWBITS, /* e.g. MAX_WBITS */
- ZLIB_COMP_MEMLEVEL, /* e.g. DEF_MEM_LEVEL */
- ZLIB_COMP_STRATEGY, /* e.g. Z_DEFAULT_STRATEGY */
- __ZLIB_COMP_MAX,
-};
-
-#define ZLIB_COMP_MAX (__ZLIB_COMP_MAX - 1)
-
-
-enum zlib_decomp_params {
- ZLIB_DECOMP_WINDOWBITS = 1, /* e.g. DEF_WBITS */
- __ZLIB_DECOMP_MAX,
-};
-
-#define ZLIB_DECOMP_MAX (__ZLIB_DECOMP_MAX - 1)
-
-
-struct crypto_pcomp {
- struct crypto_tfm base;
-};
-
-struct pcomp_alg {
- int (*compress_setup)(struct crypto_pcomp *tfm, const void *params,
- unsigned int len);
- int (*compress_init)(struct crypto_pcomp *tfm);
- int (*compress_update)(struct crypto_pcomp *tfm,
- struct comp_request *req);
- int (*compress_final)(struct crypto_pcomp *tfm,
- struct comp_request *req);
- int (*decompress_setup)(struct crypto_pcomp *tfm, const void *params,
- unsigned int len);
- int (*decompress_init)(struct crypto_pcomp *tfm);
- int (*decompress_update)(struct crypto_pcomp *tfm,
- struct comp_request *req);
- int (*decompress_final)(struct crypto_pcomp *tfm,
- struct comp_request *req);
-
- struct crypto_alg base;
-};
-
-extern struct crypto_pcomp *crypto_alloc_pcomp(const char *alg_name, u32 type,
- u32 mask);
-
-static inline struct crypto_tfm *crypto_pcomp_tfm(struct crypto_pcomp *tfm)
-{
- return &tfm->base;
-}
-
-static inline void crypto_free_pcomp(struct crypto_pcomp *tfm)
-{
- crypto_destroy_tfm(tfm, crypto_pcomp_tfm(tfm));
-}
-
-static inline struct pcomp_alg *__crypto_pcomp_alg(struct crypto_alg *alg)
-{
- return container_of(alg, struct pcomp_alg, base);
-}
-
-static inline struct pcomp_alg *crypto_pcomp_alg(struct crypto_pcomp *tfm)
-{
- return __crypto_pcomp_alg(crypto_pcomp_tfm(tfm)->__crt_alg);
-}
-
-static inline int crypto_compress_setup(struct crypto_pcomp *tfm,
- const void *params, unsigned int len)
-{
- return crypto_pcomp_alg(tfm)->compress_setup(tfm, params, len);
-}
-
-static inline int crypto_compress_init(struct crypto_pcomp *tfm)
-{
- return crypto_pcomp_alg(tfm)->compress_init(tfm);
-}
-
-static inline int crypto_compress_update(struct crypto_pcomp *tfm,
- struct comp_request *req)
-{
- return crypto_pcomp_alg(tfm)->compress_update(tfm, req);
-}
-
-static inline int crypto_compress_final(struct crypto_pcomp *tfm,
- struct comp_request *req)
-{
- return crypto_pcomp_alg(tfm)->compress_final(tfm, req);
-}
-
-static inline int crypto_decompress_setup(struct crypto_pcomp *tfm,
- const void *params, unsigned int len)
-{
- return crypto_pcomp_alg(tfm)->decompress_setup(tfm, params, len);
-}
-
-static inline int crypto_decompress_init(struct crypto_pcomp *tfm)
-{
- return crypto_pcomp_alg(tfm)->decompress_init(tfm);
-}
-
-static inline int crypto_decompress_update(struct crypto_pcomp *tfm,
- struct comp_request *req)
-{
- return crypto_pcomp_alg(tfm)->decompress_update(tfm, req);
-}
-
-static inline int crypto_decompress_final(struct crypto_pcomp *tfm,
- struct comp_request *req)
-{
- return crypto_pcomp_alg(tfm)->decompress_final(tfm, req);
-}
-
-#endif /* _CRYPTO_COMPRESS_H */
diff --git a/include/crypto/internal/compress.h b/include/crypto/internal/compress.h
deleted file mode 100644
index 178a888..0000000
--- a/include/crypto/internal/compress.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/*
- * Compress: Compression algorithms under the cryptographic API.
- *
- * Copyright 2008 Sony Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 of the License.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.
- * If not, see <http://www.gnu.org/licenses/>.
- */
-
-#ifndef _CRYPTO_INTERNAL_COMPRESS_H
-#define _CRYPTO_INTERNAL_COMPRESS_H
-
-#include <crypto/compress.h>
-
-extern int crypto_register_pcomp(struct pcomp_alg *alg);
-extern int crypto_unregister_pcomp(struct pcomp_alg *alg);
-
-#endif /* _CRYPTO_INTERNAL_COMPRESS_H */
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index e71cb70..ab2a745 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -54,7 +54,6 @@
#define CRYPTO_ALG_TYPE_AHASH 0x0000000a
#define CRYPTO_ALG_TYPE_RNG 0x0000000c
#define CRYPTO_ALG_TYPE_AKCIPHER 0x0000000d
-#define CRYPTO_ALG_TYPE_PCOMPRESS 0x0000000f

#define CRYPTO_ALG_TYPE_HASH_MASK 0x0000000e
#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000c
--
1.9.1

2016-01-26 08:15:45

by Joonsoo Kim

Subject: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

From: Weigang Li <[email protected]>

This adds support for asynchronous compression APIs. There is no
asynchronous compression driver yet, but these APIs can be used as a
front-end to a synchronous compression algorithm. In that case the
scatterlist is linearized when needed, which adds some overhead.
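
For illustration only, a caller that also wants to work with a future
asynchronous backend could wait on the request callback along the lines
of the sketch below. This is an assumption about how such a wait would
be written (the helper names here are made up); with the scomp
front-end in this series crypto_acomp_compress() returns synchronously
and the callback is never invoked.

#include <linux/completion.h>
#include <crypto/compress.h>

struct acomp_wait {
	struct completion done;
	int err;
};

/* Completion callback invoked by an asynchronous backend. */
static void acomp_wait_cb(struct crypto_async_request *req, int err)
{
	struct acomp_wait *w = req->data;

	if (err == -EINPROGRESS)
		return;

	w->err = err;
	complete(&w->done);
}

static int acomp_compress_wait(struct acomp_req *req)
{
	struct acomp_wait w;
	int ret;

	init_completion(&w.done);
	acomp_request_set_callback(req, 0, acomp_wait_cb, &w);

	ret = crypto_acomp_compress(req);
	if (ret == -EINPROGRESS) {
		wait_for_completion(&w.done);
		ret = w.err;
	}

	return ret;
}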

Signed-off-by: Weigang Li <[email protected]>
Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/Kconfig | 3 +-
crypto/Makefile | 3 +-
crypto/acompress.c | 164 ++++++++++++++++++++++++
crypto/scompress.c | 170 +++++++++++++++++++++++++
include/crypto/compress.h | 253 +++++++++++++++++++++++++++++++++++++
include/crypto/internal/compress.h | 4 +
include/linux/crypto.h | 2 +
7 files changed, 596 insertions(+), 3 deletions(-)
create mode 100644 crypto/acompress.c
create mode 100644 include/crypto/internal/compress.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 7159520..f22f4e9 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -84,7 +84,7 @@ config CRYPTO_RNG_DEFAULT
tristate
select CRYPTO_DRBG_MENU

-config CRYPTO_SCOMPRESS
+config CRYPTO_COMPRESS2
tristate
select CRYPTO_ALGAPI2

@@ -1503,7 +1503,6 @@ config CRYPTO_LZO
select CRYPTO_ALGAPI
select LZO_COMPRESS
select LZO_DECOMPRESS
- select SCOMPRESS
help
This is the LZO algorithm.

diff --git a/crypto/Makefile b/crypto/Makefile
index 16ef796..9157d69 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -28,7 +28,8 @@ crypto_hash-y += ahash.o
crypto_hash-y += shash.o
obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o

-obj-$(CONFIG_CRYPTO_SCOMPRESS) += scompress.o
+crypto_compress-y += scompress.o acompress.o
+obj-$(CONFIG_CRYPTO_COMPRESS2) += crypto_compress.o
obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o

$(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
diff --git a/crypto/acompress.c b/crypto/acompress.c
new file mode 100644
index 0000000..ddaa5a0
--- /dev/null
+++ b/crypto/acompress.c
@@ -0,0 +1,164 @@
+/*
+ * Asynchronous compression operations
+ *
+ * Copyright (c) 2015, Intel Corporation
+ * Authors: Weigang Li <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/crypto.h>
+
+#include <crypto/algapi.h>
+#include <linux/cryptouser.h>
+#include <net/netlink.h>
+#include <crypto/compress.h>
+#include <crypto/internal/compress.h>
+#include "internal.h"
+
+const struct crypto_type crypto_acomp_type;
+
+#ifdef CONFIG_NET
+static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+ struct crypto_report_comp racomp;
+
+ strncpy(racomp.type, "acomp", sizeof(racomp.type));
+
+ if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+ sizeof(struct crypto_report_comp), &racomp))
+ goto nla_put_failure;
+ return 0;
+
+nla_put_failure:
+ return -EMSGSIZE;
+}
+#else
+static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+ return -ENOSYS;
+}
+#endif
+
+static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
+ __attribute__ ((unused));
+
+static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
+{
+ seq_puts(m, "type : acomp\n");
+}
+
+static void crypto_acomp_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+ struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+ alg->exit(acomp);
+}
+
+static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+ struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+ if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+ return crypto_init_scomp_ops_async(tfm);
+
+ acomp->compress = alg->compress;
+ acomp->decompress = alg->decompress;
+
+ if (alg->exit)
+ acomp->base.exit = crypto_acomp_exit_tfm;
+
+ if (alg->init)
+ return alg->init(acomp);
+
+ return 0;
+}
+
+static unsigned int crypto_acomp_extsize(struct crypto_alg *alg)
+{
+ if (alg->cra_type == &crypto_acomp_type)
+ return alg->cra_ctxsize;
+
+ return sizeof(void *);
+}
+
+const struct crypto_type crypto_acomp_type = {
+ .extsize = crypto_acomp_extsize,
+ .init_tfm = crypto_acomp_init_tfm,
+#ifdef CONFIG_PROC_FS
+ .show = crypto_acomp_show,
+#endif
+ .report = crypto_acomp_report,
+ .maskclear = ~CRYPTO_ALG_TYPE_MASK,
+ .maskset = CRYPTO_ALG_TYPE_ACOMPRESS_MASK,
+ .type = CRYPTO_ALG_TYPE_ACOMPRESS,
+ .tfmsize = offsetof(struct crypto_acomp, base),
+};
+EXPORT_SYMBOL_GPL(crypto_acomp_type);
+
+struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
+ u32 mask)
+{
+ return crypto_alloc_tfm(alg_name, &crypto_acomp_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
+
+struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp,
+ gfp_t gfp)
+{
+ struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+ struct acomp_req *req;
+
+ if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+ return crypto_scomp_acomp_request_alloc(acomp, gfp);
+
+ req = kzalloc(sizeof(*req) + crypto_acomp_reqsize(acomp), gfp);
+ if (likely(req))
+ acomp_request_set_tfm(req, acomp);
+
+ return req;
+}
+EXPORT_SYMBOL_GPL(acomp_request_alloc);
+
+void acomp_request_free(struct acomp_req *req)
+{
+ struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
+ struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+
+ if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+ return crypto_scomp_acomp_request_free(req);
+
+ kfree(req);
+}
+EXPORT_SYMBOL_GPL(acomp_request_free);
+
+int crypto_register_acomp(struct acomp_alg *alg)
+{
+ struct crypto_alg *base = &alg->base;
+
+ base->cra_type = &crypto_acomp_type;
+ base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+ base->cra_flags |= CRYPTO_ALG_TYPE_ACOMPRESS;
+ return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_acomp);
+
+int crypto_unregister_acomp(struct acomp_alg *alg)
+{
+ return crypto_unregister_alg(&alg->base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_acomp);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Asynchronous compression type");
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 7c9955b..e5ebf24 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -24,8 +24,12 @@
#include <linux/module.h>
#include <linux/seq_file.h>
#include <linux/cryptouser.h>
+#include <linux/vmalloc.h>
+#include <linux/highmem.h>
+#include <linux/gfp.h>

#include <crypto/compress.h>
+#include <crypto/scatterwalk.h>
#include <net/netlink.h>

#include "internal.h"
@@ -90,6 +94,172 @@ struct crypto_scomp *crypto_alloc_scomp(const char *alg_name, u32 type,
}
EXPORT_SYMBOL_GPL(crypto_alloc_scomp);

+static void *scomp_map(struct scatterlist *sg, unsigned int len)
+{
+ gfp_t gfp_flags;
+ void *buf;
+
+ if (sg_is_last(sg))
+ return kmap_atomic(sg_page(sg)) + sg->offset;
+
+ if (in_atomic() || irqs_disabled())
+ gfp_flags = GFP_ATOMIC;
+ else
+ gfp_flags = GFP_KERNEL;
+
+ buf = kmalloc(len, gfp_flags);
+ if (!buf)
+ return NULL;
+
+ scatterwalk_map_and_copy(buf, sg, 0, len, 0);
+
+ return buf;
+}
+
+static void scomp_unmap(struct scatterlist *sg, void *buf, unsigned int len)
+{
+ if (!buf)
+ return;
+
+ if (sg_is_last(sg)) {
+ kunmap_atomic(buf);
+ return;
+ }
+
+ scatterwalk_map_and_copy(buf, sg, 0, len, 1);
+ kfree(buf);
+}
+
+static int scomp_acomp_compress(struct acomp_req *req,
+ struct crypto_acomp *tfm)
+{
+ int ret;
+ void **tfm_ctx = crypto_acomp_ctx(tfm);
+ struct crypto_scomp *scomp = (struct crypto_scomp *)*tfm_ctx;
+ void *ctx = *(req->__ctx);
+ char *src = scomp_map(req->src, req->src_len);
+ char *dst = scomp_map(req->dst, req->dst_len);
+
+ if (!src || !dst) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ req->out_len = req->dst_len;
+ ret = crypto_scomp_compress(scomp, src, req->src_len,
+ dst, &req->out_len, ctx);
+
+out:
+ scomp_unmap(req->src, src, 0);
+ scomp_unmap(req->dst, dst, ret ? 0 : req->out_len);
+
+ return ret;
+}
+
+static int scomp_async_compress(struct acomp_req *req)
+{
+ return scomp_acomp_compress(req, crypto_acomp_reqtfm(req));
+}
+
+static int scomp_acomp_decompress(struct acomp_req *req,
+ struct crypto_acomp *tfm)
+{
+ int ret;
+ void **tfm_ctx = crypto_acomp_ctx(tfm);
+ struct crypto_scomp *scomp = (struct crypto_scomp *)*tfm_ctx;
+ void *ctx = *(req->__ctx);
+ char *src = scomp_map(req->src, req->src_len);
+ char *dst = scomp_map(req->dst, req->dst_len);
+
+ if (!src || !dst) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ req->out_len = req->dst_len;
+ ret = crypto_scomp_decompress(scomp, src, req->src_len,
+ dst, &req->out_len, ctx);
+
+out:
+ scomp_unmap(req->src, src, 0);
+ scomp_unmap(req->dst, dst, ret ? 0 : req->out_len);
+
+ return ret;
+}
+
+static int scomp_async_decompress(struct acomp_req *req)
+{
+ return scomp_acomp_decompress(req, crypto_acomp_reqtfm(req));
+}
+
+static void crypto_exit_scomp_ops_async(struct crypto_tfm *tfm)
+{
+ struct crypto_scomp **ctx = crypto_tfm_ctx(tfm);
+
+ crypto_free_scomp(*ctx);
+}
+
+int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)
+{
+ struct crypto_alg *calg = tfm->__crt_alg;
+ struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+ struct crypto_scomp *scomp;
+ void **ctx = crypto_tfm_ctx(tfm);
+
+ if (!crypto_mod_get(calg))
+ return -EAGAIN;
+
+ scomp = crypto_create_tfm(calg, &crypto_scomp_type);
+ if (IS_ERR(scomp)) {
+ crypto_mod_put(calg);
+ return PTR_ERR(scomp);
+ }
+
+ *ctx = scomp;
+ tfm->exit = crypto_exit_scomp_ops_async;
+
+ acomp->compress = scomp_async_compress;
+ acomp->decompress = scomp_async_decompress;
+ acomp->reqsize = sizeof(void *);
+
+ return 0;
+}
+
+struct acomp_req *crypto_scomp_acomp_request_alloc(struct crypto_acomp *tfm,
+ gfp_t gfp)
+{
+ void **tfm_ctx = crypto_acomp_ctx(tfm);
+ struct crypto_scomp *scomp = (struct crypto_scomp *)*tfm_ctx;
+ struct acomp_req *req;
+ void *ctx;
+
+ req = kzalloc(sizeof(*req) + crypto_acomp_reqsize(tfm), gfp);
+ if (!req)
+ return NULL;
+
+ ctx = crypto_scomp_alloc_ctx(scomp);
+ if (IS_ERR(ctx)) {
+ kfree(req);
+ return NULL;
+ }
+
+ *(req->__ctx) = ctx;
+ acomp_request_set_tfm(req, tfm);
+
+ return req;
+}
+
+void crypto_scomp_acomp_request_free(struct acomp_req *req)
+{
+ struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+ void **tfm_ctx = crypto_acomp_ctx(tfm);
+ struct crypto_scomp *scomp = (struct crypto_scomp *)*tfm_ctx;
+ void *ctx = *(req->__ctx);
+
+ crypto_scomp_free_ctx(scomp, ctx);
+ kfree(req);
+}
+
int crypto_register_scomp(struct scomp_alg *alg)
{
struct crypto_alg *base = &alg->base;
diff --git a/include/crypto/compress.h b/include/crypto/compress.h
index e4053fc..5b6332d 100644
--- a/include/crypto/compress.h
+++ b/include/crypto/compress.h
@@ -90,4 +90,257 @@ static inline bool crypto_scomp_decomp_noctx(struct crypto_scomp *tfm)

extern int crypto_register_scomp(struct scomp_alg *alg);
extern int crypto_unregister_scomp(struct scomp_alg *alg);
+
+/**
+ * struct acomp_req - asynchronous compression request
+ *
+ * @base: Common attributes for async crypto requests
+ * @src: Pointer to memory containing the input scatterlist buffer
+ * @dst: Pointer to memory containing the output scatterlist buffer
+ * @src_len: Length of input buffer
+ * @dst_len: Length of output buffer
+ * @out_len: Number of bytes produced by (de)compressor
+ * @__ctx: Start of private context data
+ */
+struct acomp_req {
+ struct crypto_async_request base;
+ struct scatterlist *src;
+ struct scatterlist *dst;
+ unsigned int src_len;
+ unsigned int dst_len;
+ unsigned int out_len;
+ void *__ctx[] CRYPTO_MINALIGN_ATTR;
+};
+
+/**
+ * struct crypto_acomp - user-instantiated objects which encapsulate
+ * algorithms and core processing logic
+ *
+ * @compress: Function performs a compress operation
+ * @decompress: Function performs a de-compress operation
+ * @reqsize: Request size required by algorithm implementation
+ * @base: Common crypto API algorithm data structure
+ */
+struct crypto_acomp {
+ int (*compress)(struct acomp_req *req);
+ int (*decompress)(struct acomp_req *req);
+ unsigned int reqsize;
+ struct crypto_tfm base;
+};
+
+/**
+ * struct acomp_alg - async compression algorithm
+ *
+ * @compress: Function performs a compress operation
+ * @decompress: Function performs a de-compress operation
+ * @init: Initialize the cryptographic transformation object.
+ * This function is used to initialize the cryptographic
+ * transformation object. This function is called only once at
+ * the instantiation time, right after the transformation context
+ * was allocated. In case the cryptographic hardware has some
+ * special requirements which need to be handled by software, this
+ * function shall check for the precise requirement of the
+ * transformation and put any software fallbacks in place.
+ * @exit: Deinitialize the cryptographic transformation object. This is a
+ * counterpart to @init, used to remove various changes set in
+ * @init.
+ *
+ * @base: Common crypto API algorithm data structure
+ */
+struct acomp_alg {
+ int (*compress)(struct acomp_req *req);
+ int (*decompress)(struct acomp_req *req);
+ int (*init)(struct crypto_acomp *tfm);
+ void (*exit)(struct crypto_acomp *tfm);
+ struct crypto_alg base;
+};
+
+/**
+ * DOC: Asynchronous Compression API
+ *
+ * The Asynchronous Compression API is used with the algorithms of type
+ * CRYPTO_ALG_TYPE_ACOMPRESS (listed as type "acomp" in /proc/crypto)
+ */
+
+/**
+ * crypto_alloc_acompress() -- allocate ACOMPRESS tfm handle
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ * compression algorithm e.g. "deflate"
+ * @type: specifies the type of the algorithm
+ * @mask: specifies the mask for the algorithm
+ *
+ * Allocate a handle for compression algorithm. The returned struct
+ * crypto_acomp is the handle that is required for any subsequent
+ * API invocation for the compression operations.
+ *
+ * Return: allocated handle in case of success; IS_ERR() is true in case
+ * of an error, PTR_ERR() returns the error code.
+ */
+struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
+ u32 mask);
+
+static inline struct crypto_tfm *crypto_acomp_tfm(struct crypto_acomp *tfm)
+{
+ return &tfm->base;
+}
+
+static inline struct crypto_acomp *crypto_acomp_cast(struct crypto_tfm *tfm)
+{
+ return (struct crypto_acomp *)tfm;
+}
+
+static inline void *crypto_acomp_ctx(struct crypto_acomp *tfm)
+{
+ return crypto_tfm_ctx(crypto_acomp_tfm(tfm));
+}
+
+static inline struct acomp_alg *__crypto_acomp_alg(struct crypto_alg *alg)
+{
+ return container_of(alg, struct acomp_alg, base);
+}
+
+static inline struct crypto_acomp *__crypto_acomp_tfm(
+ struct crypto_tfm *tfm)
+{
+ return container_of(tfm, struct crypto_acomp, base);
+}
+
+static inline struct acomp_alg *crypto_acomp_alg(
+ struct crypto_acomp *tfm)
+{
+ return __crypto_acomp_alg(crypto_acomp_tfm(tfm)->__crt_alg);
+}
+
+static inline unsigned int crypto_acomp_reqsize(struct crypto_acomp *tfm)
+{
+ return tfm->reqsize;
+}
+
+static inline void acomp_request_set_tfm(struct acomp_req *req,
+ struct crypto_acomp *tfm)
+{
+ req->base.tfm = crypto_acomp_tfm(tfm);
+}
+
+static inline struct crypto_acomp *crypto_acomp_reqtfm(
+ struct acomp_req *req)
+{
+ return __crypto_acomp_tfm(req->base.tfm);
+}
+
+/**
+ * crypto_free_acomp() -- free ACOMPRESS tfm handle
+ *
+ * @tfm: ACOMPRESS tfm handle allocated with crypto_alloc_acompr()
+ */
+static inline void crypto_free_acomp(struct crypto_acomp *tfm)
+{
+ crypto_destroy_tfm(tfm, crypto_acomp_tfm(tfm));
+}
+
+static inline int crypto_has_acomp(const char *alg_name, u32 type, u32 mask)
+{
+ type &= ~CRYPTO_ALG_TYPE_MASK;
+ type |= CRYPTO_ALG_TYPE_ACOMPRESS;
+ mask |= CRYPTO_ALG_TYPE_MASK;
+
+ return crypto_has_alg(alg_name, type, mask);
+}
+
+/**
+ * acomp_request_alloc() -- allocates async compress request
+ *
+ * @tfm: ACOMPRESS tfm handle allocated with crypto_alloc_acomp()
+ * @gfp: allocation flags
+ *
+ * Return: allocated handle in case of success or NULL in case of an error.
+ */
+struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp,
+ gfp_t gfp);
+
+/**
+ * acomp_request_free() -- zeroize and free async compress request
+ *
+ * @req: request to free
+ */
+void acomp_request_free(struct acomp_req *acomp);
+
+/**
+ * acomp_request_set_callback() -- Sets an asynchronous callback.
+ *
+ * Callback will be called when an asynchronous operation on a given
+ * request is finished.
+ *
+ * @req: request that the callback will be set for
+ * @flgs: specify for instance if the operation may backlog
+ * @cmlp: callback which will be called
+ * @data: private data used by the caller
+ */
+static inline void acomp_request_set_callback(struct acomp_req *req, u32 flgs,
+ crypto_completion_t cmpl, void *data)
+{
+ req->base.complete = cmpl;
+ req->base.data = data;
+ req->base.flags = flgs;
+}
+
+/**
+ * acomp_request_set_comp() -- Sets reqest parameters
+ *
+ * Sets parameters required by acomp operation
+ *
+ * @req: async compress request
+ * @src: ptr to input buffer list
+ * @dst: ptr to output buffer list
+ * @src_len: size of the input buffer
+ * @dst_len: size of the output buffer
+ * @result: (de)compression result returned by compressor
+ */
+static inline void acomp_request_set_comp(struct acomp_req *req,
+ struct scatterlist *src,
+ struct scatterlist *dst,
+ unsigned int src_len,
+ unsigned int dst_len)
+{
+ req->src = src;
+ req->dst = dst;
+ req->src_len = src_len;
+ req->dst_len = dst_len;
+ req->out_len = 0;
+}
+
+/**
+ * crypto_acomp_compress() -- Invoke async compress operation
+ *
+ * Function invokes the async compress operation
+ *
+ * @req: async compress request
+ *
+ * Return: zero on success; error code in case of error
+ */
+static inline int crypto_acomp_compress(struct acomp_req *req)
+{
+ struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+
+ return tfm->compress(req);
+}
+
+/**
+ * crypto_acomp_decompress() -- Invoke async decompress operation
+ *
+ * Function invokes the async decompress operation
+ *
+ * @req: async compress request
+ *
+ * Return: zero on success; error code in case of error
+ */
+static inline int crypto_acomp_decompress(struct acomp_req *req)
+{
+ struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+
+ return tfm->decompress(req);
+}
+
+extern int crypto_register_acomp(struct acomp_alg *alg);
+extern int crypto_unregister_acomp(struct acomp_alg *alg);
#endif
diff --git a/include/crypto/internal/compress.h b/include/crypto/internal/compress.h
new file mode 100644
index 0000000..088bc5b
--- /dev/null
+++ b/include/crypto/internal/compress.h
@@ -0,0 +1,4 @@
+extern int crypto_init_scomp_ops_async(struct crypto_tfm *tfm);
+extern struct acomp_req *
+crypto_scomp_acomp_request_alloc(struct crypto_acomp *tfm, gfp_t gfp);
+extern void crypto_scomp_acomp_request_free(struct acomp_req *req);
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index ba73c18..ccd1d32 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -55,9 +55,11 @@
#define CRYPTO_ALG_TYPE_RNG 0x0000000c
#define CRYPTO_ALG_TYPE_AKCIPHER 0x0000000d
#define CRYPTO_ALG_TYPE_SCOMPRESS 0x0000000e
+#define CRYPTO_ALG_TYPE_ACOMPRESS 0x0000000f

#define CRYPTO_ALG_TYPE_HASH_MASK 0x0000000e
#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000c
+#define CRYPTO_ALG_TYPE_ACOMPRESS_MASK 0x0000000e
#define CRYPTO_ALG_TYPE_BLKCIPHER_MASK 0x0000000c

#define CRYPTO_ALG_LARVAL 0x00000010
--
1.9.1

2016-01-26 08:15:05

by Joonsoo Kim

Subject: [PATCH v2 03/10] crypto/compress: introduce synchronous compression API

This introduces new compression APIs. The major change is that the APIs
are stateless: unlike the previous implementation, the tfm object does
not embed any context, so we can de/compress concurrently with a single
tfm object. The de/compression context is instead coupled with the
request. This architectural change makes the APIs more flexible and
lets us naturally use the asynchronous API as a front-end to a
synchronous compression algorithm.

Moreover, thanks to this change, we can decompress without a context
buffer if the algorithm supports it; this is indicated by
crypto_scomp_decomp_noctx(). In that case we achieve maximum
parallelism without the memory overhead of a context buffer.
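
A minimal sketch of the synchronous flow, inferred from how the acomp
front-end added later in this series drives it (exact signatures are in
include/crypto/compress.h; the "lzo" name and the buffer handling here
are illustrative assumptions):

#include <linux/err.h>
#include <linux/types.h>
#include <crypto/compress.h>

static int example_scomp_compress(const u8 *src, unsigned int slen,
				  u8 *dst, unsigned int *dlen)
{
	struct crypto_scomp *tfm;
	void *ctx;
	int ret;

	tfm = crypto_alloc_scomp("lzo", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/*
	 * The de/compression context is per request, not per tfm, so one
	 * tfm can serve concurrent users. If crypto_scomp_decomp_noctx()
	 * is true, decompression can skip the context entirely.
	 */
	ctx = crypto_scomp_alloc_ctx(tfm);
	if (IS_ERR(ctx)) {
		ret = PTR_ERR(ctx);
		goto free_tfm;
	}

	ret = crypto_scomp_compress(tfm, src, slen, dst, dlen, ctx);

	crypto_scomp_free_ctx(tfm, ctx);
free_tfm:
	crypto_free_scomp(tfm);
	return ret;
}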

Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/Kconfig | 5 ++
crypto/Makefile | 1 +
crypto/scompress.c | 114 ++++++++++++++++++++++++++++++++++++++++++++++
include/crypto/compress.h | 93 +++++++++++++++++++++++++++++++++++++
include/linux/crypto.h | 1 +
5 files changed, 214 insertions(+)
create mode 100644 crypto/scompress.c
create mode 100644 include/crypto/compress.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index c80d34f..7159520 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -84,6 +84,10 @@ config CRYPTO_RNG_DEFAULT
tristate
select CRYPTO_DRBG_MENU

+config CRYPTO_SCOMPRESS
+ tristate
+ select CRYPTO_ALGAPI2
+
config CRYPTO_AKCIPHER2
tristate
select CRYPTO_ALGAPI2
@@ -1499,6 +1503,7 @@ config CRYPTO_LZO
select CRYPTO_ALGAPI
select LZO_COMPRESS
select LZO_DECOMPRESS
+ select SCOMPRESS
help
This is the LZO algorithm.

diff --git a/crypto/Makefile b/crypto/Makefile
index ffe18c9..16ef796 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -28,6 +28,7 @@ crypto_hash-y += ahash.o
crypto_hash-y += shash.o
obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o

+obj-$(CONFIG_CRYPTO_SCOMPRESS) += scompress.o
obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o

$(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
diff --git a/crypto/scompress.c b/crypto/scompress.c
new file mode 100644
index 0000000..7c9955b
--- /dev/null
+++ b/crypto/scompress.c
@@ -0,0 +1,114 @@
+/*
+ * Cryptographic API.
+ *
+ * Synchronous compression operations.
+ *
+ * Copyright 2015 LG Electronics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.
+ * If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/crypto.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/seq_file.h>
+#include <linux/cryptouser.h>
+
+#include <crypto/compress.h>
+#include <net/netlink.h>
+
+#include "internal.h"
+
+
+static int crypto_scomp_init(struct crypto_tfm *tfm, u32 type, u32 mask)
+{
+ return 0;
+}
+
+static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
+{
+ return 0;
+}
+
+#ifdef CONFIG_NET
+static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+ struct crypto_report_comp rcomp;
+
+ strncpy(rcomp.type, "scomp", sizeof(rcomp.type));
+ if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+ sizeof(struct crypto_report_comp), &rcomp))
+ goto nla_put_failure;
+ return 0;
+
+nla_put_failure:
+ return -EMSGSIZE;
+}
+#else
+static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+ return -ENOSYS;
+}
+#endif
+
+static void crypto_scomp_show(struct seq_file *m, struct crypto_alg *alg)
+ __attribute__ ((unused));
+static void crypto_scomp_show(struct seq_file *m, struct crypto_alg *alg)
+{
+ seq_puts(m, "type : scomp\n");
+}
+
+static const struct crypto_type crypto_scomp_type = {
+ .extsize = crypto_alg_extsize,
+ .init = crypto_scomp_init,
+ .init_tfm = crypto_scomp_init_tfm,
+#ifdef CONFIG_PROC_FS
+ .show = crypto_scomp_show,
+#endif
+ .report = crypto_scomp_report,
+ .maskclear = ~CRYPTO_ALG_TYPE_MASK,
+ .maskset = CRYPTO_ALG_TYPE_MASK,
+ .type = CRYPTO_ALG_TYPE_SCOMPRESS,
+ .tfmsize = offsetof(struct crypto_scomp, base),
+};
+
+struct crypto_scomp *crypto_alloc_scomp(const char *alg_name, u32 type,
+ u32 mask)
+{
+ return crypto_alloc_tfm(alg_name, &crypto_scomp_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_scomp);
+
+int crypto_register_scomp(struct scomp_alg *alg)
+{
+ struct crypto_alg *base = &alg->base;
+
+ base->cra_type = &crypto_scomp_type;
+ base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+ base->cra_flags |= CRYPTO_ALG_TYPE_SCOMPRESS;
+
+ return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_scomp);
+
+int crypto_unregister_scomp(struct scomp_alg *alg)
+{
+ return crypto_unregister_alg(&alg->base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_scomp);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Synchronous compression operations");
+MODULE_AUTHOR("LG Electronics Inc.");
+
diff --git a/include/crypto/compress.h b/include/crypto/compress.h
new file mode 100644
index 0000000..e4053fc
--- /dev/null
+++ b/include/crypto/compress.h
@@ -0,0 +1,93 @@
+#ifndef _CRYPTO_COMPRESS_H
+#define _CRYPTO_COMPRESS_H
+#include <linux/crypto.h>
+
+#define CRYPTO_SCOMP_DECOMP_NOCTX CRYPTO_ALG_PRIVATE
+
+struct crypto_scomp {
+ struct crypto_tfm base;
+};
+
+struct scomp_alg {
+ void *(*alloc_ctx)(struct crypto_scomp *tfm);
+ void (*free_ctx)(struct crypto_scomp *tfm, void *ctx);
+ int (*compress)(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx);
+ int (*decompress)(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx);
+
+ struct crypto_alg base;
+};
+
+extern struct crypto_scomp *crypto_alloc_scomp(const char *alg_name, u32 type,
+ u32 mask);
+
+static inline struct crypto_tfm *crypto_scomp_tfm(struct crypto_scomp *tfm)
+{
+ return &tfm->base;
+}
+
+static inline struct crypto_scomp *crypto_scomp_cast(struct crypto_tfm *tfm)
+{
+ return (struct crypto_scomp *)tfm;
+}
+
+static inline void crypto_free_scomp(struct crypto_scomp *tfm)
+{
+ crypto_destroy_tfm(tfm, crypto_scomp_tfm(tfm));
+}
+
+static inline int crypto_has_scomp(const char *alg_name, u32 type, u32 mask)
+{
+ type &= ~CRYPTO_ALG_TYPE_MASK;
+ type |= CRYPTO_ALG_TYPE_SCOMPRESS;
+ mask |= CRYPTO_ALG_TYPE_MASK;
+
+ return crypto_has_alg(alg_name, type, mask);
+}
+
+static inline struct scomp_alg *__crypto_scomp_alg(struct crypto_alg *alg)
+{
+ return container_of(alg, struct scomp_alg, base);
+}
+
+static inline struct scomp_alg *crypto_scomp_alg(struct crypto_scomp *tfm)
+{
+ return __crypto_scomp_alg(crypto_scomp_tfm(tfm)->__crt_alg);
+}
+
+static inline void *crypto_scomp_alloc_ctx(struct crypto_scomp *tfm)
+{
+ return crypto_scomp_alg(tfm)->alloc_ctx(tfm);
+}
+
+static inline void crypto_scomp_free_ctx(struct crypto_scomp *tfm,
+ void *ctx)
+{
+ return crypto_scomp_alg(tfm)->free_ctx(tfm, ctx);
+}
+
+static inline int crypto_scomp_compress(struct crypto_scomp *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return crypto_scomp_alg(tfm)->compress(tfm, src, slen, dst, dlen, ctx);
+}
+
+static inline int crypto_scomp_decompress(struct crypto_scomp *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return crypto_scomp_alg(tfm)->decompress(tfm, src, slen,
+ dst, dlen, ctx);
+}
+
+static inline bool crypto_scomp_decomp_noctx(struct crypto_scomp *tfm)
+{
+ return crypto_scomp_tfm(tfm)->__crt_alg->cra_flags &
+ CRYPTO_SCOMP_DECOMP_NOCTX;
+}
+
+extern int crypto_register_scomp(struct scomp_alg *alg);
+extern int crypto_unregister_scomp(struct scomp_alg *alg);
+#endif
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 96530a1..ba73c18 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -54,6 +54,7 @@
#define CRYPTO_ALG_TYPE_AHASH 0x0000000a
#define CRYPTO_ALG_TYPE_RNG 0x0000000c
#define CRYPTO_ALG_TYPE_AKCIPHER 0x0000000d
+#define CRYPTO_ALG_TYPE_SCOMPRESS 0x0000000e

#define CRYPTO_ALG_TYPE_HASH_MASK 0x0000000e
#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000c
--
1.9.1

2016-01-26 08:15:59

by Joonsoo Kim

[permalink] [raw]
Subject: [PATCH v2 07/10] crypto/lz4hc: support new compression APIs

Now that the new compression APIs are introduced and bring some
benefits, let's support them in this algorithm as well.

Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/Kconfig | 1 +
crypto/lz4hc.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++++++-------
2 files changed, 82 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 72ab0d7..2641a60 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1529,6 +1529,7 @@ config CRYPTO_LZ4HC
select CRYPTO_ALGAPI
select LZ4HC_COMPRESS
select LZ4_DECOMPRESS
+ select CRYPTO_COMPRESS2
help
This is the LZ4 high compression mode algorithm.

diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c
index a1d3b5b..569a8a7 100644
--- a/crypto/lz4hc.c
+++ b/crypto/lz4hc.c
@@ -23,36 +23,53 @@
#include <linux/vmalloc.h>
#include <linux/lz4.h>

+#include <crypto/compress.h>
+
struct lz4hc_ctx {
void *lz4hc_comp_mem;
};

+static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
+{
+ void *ctx;
+
+ ctx = vmalloc(LZ4HC_MEM_COMPRESS);
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
+
+ return ctx;
+}
+
static int lz4hc_init(struct crypto_tfm *tfm)
{
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);

- ctx->lz4hc_comp_mem = vmalloc(LZ4HC_MEM_COMPRESS);
- if (!ctx->lz4hc_comp_mem)
+ ctx->lz4hc_comp_mem = lz4hc_alloc_ctx(NULL);
+ if (IS_ERR(ctx->lz4hc_comp_mem))
return -ENOMEM;

return 0;
}

+static void lz4hc_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+ vfree(ctx);
+}
+
static void lz4hc_exit(struct crypto_tfm *tfm)
{
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);

- vfree(ctx->lz4hc_comp_mem);
+ lz4hc_free_ctx(NULL, ctx->lz4hc_comp_mem);
}

-static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lz4hc_compress_crypto(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
{
- struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen;
int err;

- err = lz4hc_compress(src, slen, dst, &tmp_len, ctx->lz4hc_comp_mem);
+ err = lz4hc_compress(src, slen, dst, &tmp_len, ctx);

if (err < 0)
return -EINVAL;
@@ -61,8 +78,23 @@ static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
return 0;
}

-static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lz4hc_scompress(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return __lz4hc_compress_crypto(src, slen, dst, dlen, ctx);
+}
+
+static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+ struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ return __lz4hc_compress_crypto(src, slen, dst, dlen,
+ ctx->lz4hc_comp_mem);
+}
+
+static int __lz4hc_decompress_crypto(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
{
int err;
size_t tmp_len = *dlen;
@@ -76,6 +108,18 @@ static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
return err;
}

+static int lz4hc_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
+static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+ return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
static struct crypto_alg alg_lz4hc = {
.cra_name = "lz4hc",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
@@ -89,14 +133,41 @@ static struct crypto_alg alg_lz4hc = {
.coa_decompress = lz4hc_decompress_crypto } }
};

+static struct scomp_alg scomp = {
+ .alloc_ctx = lz4hc_alloc_ctx,
+ .free_ctx = lz4hc_free_ctx,
+ .compress = lz4hc_scompress,
+ .decompress = lz4hc_sdecompress,
+ .base = {
+ .cra_name = "lz4hc",
+ .cra_driver_name= "lz4hc-scomp",
+ .cra_flags = CRYPTO_ALG_TYPE_SCOMPRESS |
+ CRYPTO_SCOMP_DECOMP_NOCTX,
+ .cra_module = THIS_MODULE,
+ }
+};
+
static int __init lz4hc_mod_init(void)
{
- return crypto_register_alg(&alg_lz4hc);
+ int ret;
+
+ ret = crypto_register_alg(&alg_lz4hc);
+ if (ret)
+ return ret;
+
+ ret = crypto_register_scomp(&scomp);
+ if (ret) {
+ crypto_unregister_alg(&alg_lz4hc);
+ return ret;
+ }
+
+ return ret;
}

static void __exit lz4hc_mod_fini(void)
{
crypto_unregister_alg(&alg_lz4hc);
+ crypto_unregister_scomp(&scomp);
}

module_init(lz4hc_mod_init);
--
1.9.1

2016-01-26 08:15:54

by Joonsoo Kim

[permalink] [raw]
Subject: [PATCH v2 06/10] crypto/lz4: support new compression APIs

Now that the new compression APIs are introduced and bring some
benefits, let's support them in this algorithm as well.

Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/Kconfig | 1 +
crypto/lz4.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++++++-------
2 files changed, 82 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index b4b485c..72ab0d7 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1520,6 +1520,7 @@ config CRYPTO_LZ4
select CRYPTO_ALGAPI
select LZ4_COMPRESS
select LZ4_DECOMPRESS
+ select CRYPTO_COMPRESS2
help
This is the LZ4 algorithm.

diff --git a/crypto/lz4.c b/crypto/lz4.c
index aefbcea..728c6c4 100644
--- a/crypto/lz4.c
+++ b/crypto/lz4.c
@@ -24,35 +24,53 @@
#include <linux/vmalloc.h>
#include <linux/lz4.h>

+#include <crypto/compress.h>
+
struct lz4_ctx {
void *lz4_comp_mem;
};

+static void *lz4_alloc_ctx(struct crypto_scomp *tfm)
+{
+ void *ctx;
+
+ ctx = vmalloc(LZ4_MEM_COMPRESS);
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
+
+ return ctx;
+}
+
static int lz4_init(struct crypto_tfm *tfm)
{
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);

- ctx->lz4_comp_mem = vmalloc(LZ4_MEM_COMPRESS);
- if (!ctx->lz4_comp_mem)
+ ctx->lz4_comp_mem = lz4_alloc_ctx(NULL);
+ if (IS_ERR(ctx->lz4_comp_mem))
return -ENOMEM;

return 0;
}

+static void lz4_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+ vfree(ctx);
+}
+
static void lz4_exit(struct crypto_tfm *tfm)
{
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
- vfree(ctx->lz4_comp_mem);
+
+ lz4_free_ctx(NULL, ctx->lz4_comp_mem);
}

-static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lz4_compress_crypto(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
{
- struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen;
int err;

- err = lz4_compress(src, slen, dst, &tmp_len, ctx->lz4_comp_mem);
+ err = lz4_compress(src, slen, dst, &tmp_len, ctx);

if (err < 0)
return -EINVAL;
@@ -61,8 +79,22 @@ static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
return 0;
}

-static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lz4_scompress(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return __lz4_compress_crypto(src, slen, dst, dlen, ctx);
+}
+
+static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+ struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ return __lz4_compress_crypto(src, slen, dst, dlen, ctx->lz4_comp_mem);
+}
+
+static int __lz4_decompress_crypto(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
{
int err;
size_t tmp_len = *dlen;
@@ -76,6 +108,18 @@ static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
return err;
}

+static int lz4_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
+static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+ return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
static struct crypto_alg alg_lz4 = {
.cra_name = "lz4",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
@@ -89,14 +133,41 @@ static struct crypto_alg alg_lz4 = {
.coa_decompress = lz4_decompress_crypto } }
};

+static struct scomp_alg scomp = {
+ .alloc_ctx = lz4_alloc_ctx,
+ .free_ctx = lz4_free_ctx,
+ .compress = lz4_scompress,
+ .decompress = lz4_sdecompress,
+ .base = {
+ .cra_name = "lz4",
+ .cra_driver_name= "lz4-scomp",
+ .cra_flags = CRYPTO_ALG_TYPE_SCOMPRESS |
+ CRYPTO_SCOMP_DECOMP_NOCTX,
+ .cra_module = THIS_MODULE,
+ }
+};
+
static int __init lz4_mod_init(void)
{
- return crypto_register_alg(&alg_lz4);
+ int ret;
+
+ ret = crypto_register_alg(&alg_lz4);
+ if (ret)
+ return ret;
+
+ ret = crypto_register_scomp(&scomp);
+ if (ret) {
+ crypto_unregister_alg(&alg_lz4);
+ return ret;
+ }
+
+ return ret;
}

static void __exit lz4_mod_fini(void)
{
crypto_unregister_alg(&alg_lz4);
+ crypto_unregister_scomp(&scomp);
}

module_init(lz4_mod_init);
--
1.9.1

2016-01-26 08:15:11

by Joonsoo Kim

[permalink] [raw]
Subject: [PATCH v2 09/10] crypto/deflate: support new compression APIs

Now that the new compression APIs are introduced and bring some
benefits, let's support them in this algorithm as well.

Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/Kconfig | 1 +
crypto/deflate.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
2 files changed, 101 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 351b859..728f88e 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1492,6 +1492,7 @@ config CRYPTO_DEFLATE
select CRYPTO_ALGAPI
select ZLIB_INFLATE
select ZLIB_DEFLATE
+ select CRYPTO_COMPRESS2
help
This is the Deflate algorithm (RFC1951), specified for use in
IPSec with the IPCOMP protocol (RFC3173, RFC2394).
diff --git a/crypto/deflate.c b/crypto/deflate.c
index 95d8d37..1f8f633 100644
--- a/crypto/deflate.c
+++ b/crypto/deflate.c
@@ -33,6 +33,8 @@
#include <linux/mm.h>
#include <linux/net.h>

+#include <crypto/compress.h>
+
#define DEFLATE_DEF_LEVEL Z_DEFAULT_COMPRESSION
#define DEFLATE_DEF_WINBITS 11
#define DEFLATE_DEF_MEMLEVEL MAX_MEM_LEVEL
@@ -101,9 +103,8 @@ static void deflate_decomp_exit(struct deflate_ctx *ctx)
vfree(ctx->decomp_stream.workspace);
}

-static int deflate_init(struct crypto_tfm *tfm)
+static int __deflate_init(void *ctx)
{
- struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
int ret;

ret = deflate_comp_init(ctx);
@@ -116,19 +117,54 @@ out:
return ret;
}

-static void deflate_exit(struct crypto_tfm *tfm)
+static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
+{
+ void *ctx;
+ int ret;
+
+ ctx = kzalloc(sizeof(struct deflate_ctx), GFP_KERNEL);
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
+
+ ret = __deflate_init(ctx);
+ if (ret) {
+ kfree(ctx);
+ return ERR_PTR(ret);
+ }
+
+ return ctx;
+}
+
+static int deflate_init(struct crypto_tfm *tfm)
{
struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);

+ return __deflate_init(ctx);
+}
+
+static void __deflate_exit(void *ctx)
+{
deflate_comp_exit(ctx);
deflate_decomp_exit(ctx);
}

-static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+ __deflate_exit(ctx);
+}
+
+static void deflate_exit(struct crypto_tfm *tfm)
+{
+ struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ __deflate_exit(ctx);
+}
+
+static int __deflate_compress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
{
int ret = 0;
- struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+ struct deflate_ctx *dctx = ctx;
struct z_stream_s *stream = &dctx->comp_stream;

ret = zlib_deflateReset(stream);
@@ -153,12 +189,26 @@ out:
return ret;
}

-static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+ struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+
+ return __deflate_compress(src, slen, dst, dlen, dctx);
+}
+
+static int deflate_scompress(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return __deflate_compress(src, slen, dst, dlen, ctx);
+}
+
+static int __deflate_decompress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
{

int ret = 0;
- struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+ struct deflate_ctx *dctx = ctx;
struct z_stream_s *stream = &dctx->decomp_stream;

ret = zlib_inflateReset(stream);
@@ -194,6 +244,20 @@ out:
return ret;
}

+static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+ struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+
+ return __deflate_decompress(src, slen, dst, dlen, dctx);
+}
+
+static int deflate_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return __deflate_decompress(src, slen, dst, dlen, ctx);
+}
+
static struct crypto_alg alg = {
.cra_name = "deflate",
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
@@ -206,14 +270,40 @@ static struct crypto_alg alg = {
.coa_decompress = deflate_decompress } }
};

+static struct scomp_alg scomp = {
+ .alloc_ctx = deflate_alloc_ctx,
+ .free_ctx = deflate_free_ctx,
+ .compress = deflate_scompress,
+ .decompress = deflate_sdecompress,
+ .base = {
+ .cra_name = "deflate",
+ .cra_driver_name= "deflate-scomp",
+ .cra_flags = CRYPTO_ALG_TYPE_SCOMPRESS,
+ .cra_module = THIS_MODULE,
+ }
+};
+
static int __init deflate_mod_init(void)
{
- return crypto_register_alg(&alg);
+ int ret;
+
+ ret = crypto_register_alg(&alg);
+ if (ret)
+ return ret;
+
+ ret = crypto_register_scomp(&scomp);
+ if (ret) {
+ crypto_unregister_alg(&alg);
+ return ret;
+ }
+
+ return ret;
}

static void __exit deflate_mod_fini(void)
{
crypto_unregister_alg(&alg);
+ crypto_unregister_scomp(&scomp);
}

module_init(deflate_mod_init);
--
1.9.1

2016-01-26 08:15:12

by Joonsoo Kim

[permalink] [raw]
Subject: [PATCH v2 10/10] crypto/testmgr: add new compression APIs test

The new compression APIs are supported now, so we need test cases.
This patch implements them based on the previous compression test
framework. Almost all of the changes are straightforward.

Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/Kconfig | 1 +
crypto/testmgr.c | 227 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
2 files changed, 216 insertions(+), 12 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 728f88e..4b9d796 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -118,6 +118,7 @@ config CRYPTO_MANAGER2
select CRYPTO_HASH2
select CRYPTO_BLKCIPHER2
select CRYPTO_AKCIPHER2
+ select CRYPTO_COMPRESS2

config CRYPTO_USER
tristate "Userspace cryptographic algorithm configuration"
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 086aa4d..d72ab6a 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -32,6 +32,7 @@
#include <crypto/rng.h>
#include <crypto/drbg.h>
#include <crypto/akcipher.h>
+#include <crypto/compress.h>

#include "internal.h"

@@ -1205,12 +1206,14 @@ static int test_skcipher(struct crypto_skcipher *tfm, int enc,
return 0;
}

-static int test_comp(struct crypto_comp *tfm, struct comp_testvec *ctemplate,
- struct comp_testvec *dtemplate, int ctcount, int dtcount)
+static int test_comp(struct crypto_tfm *tfm, void *ctx, int type,
+ struct comp_testvec *ctemplate, struct comp_testvec *dtemplate,
+ int ctcount, int dtcount)
{
- const char *algo = crypto_tfm_alg_driver_name(crypto_comp_tfm(tfm));
+ const char *algo = crypto_tfm_alg_driver_name(tfm);
unsigned int i;
char result[COMP_BUF_SIZE];
+ struct scatterlist src, dst;
int ret;

for (i = 0; i < ctcount; i++) {
@@ -1220,8 +1223,33 @@ static int test_comp(struct crypto_comp *tfm, struct comp_testvec *ctemplate,
memset(result, 0, sizeof (result));

ilen = ctemplate[i].inlen;
- ret = crypto_comp_compress(tfm, ctemplate[i].input,
- ilen, result, &dlen);
+
+ switch (type) {
+ case 0:
+ ret = crypto_comp_compress(crypto_comp_cast(tfm),
+ ctemplate[i].input, ilen,
+ result, &dlen);
+ break;
+
+ case 1:
+ ret = crypto_scomp_compress(crypto_scomp_cast(tfm),
+ ctemplate[i].input, ilen,
+ result, &dlen, ctx);
+ break;
+
+ case 2:
+ sg_init_one(&src, ctemplate[i].input, ilen);
+ sg_init_one(&dst, result, dlen);
+ acomp_request_set_comp(ctx, &src, &dst, ilen, dlen);
+ ret = crypto_acomp_compress(ctx);
+ dlen = ((struct acomp_req *)ctx)->out_len;
+ break;
+
+ default:
+ ret = 1;
+ break;
+ }
+
if (ret) {
printk(KERN_ERR "alg: comp: compression failed "
"on test %d for %s: ret=%d\n", i + 1, algo,
@@ -1253,8 +1281,32 @@ static int test_comp(struct crypto_comp *tfm, struct comp_testvec *ctemplate,
memset(result, 0, sizeof (result));

ilen = dtemplate[i].inlen;
- ret = crypto_comp_decompress(tfm, dtemplate[i].input,
- ilen, result, &dlen);
+ switch (type) {
+ case 0:
+ ret = crypto_comp_decompress(crypto_comp_cast(tfm),
+ dtemplate[i].input, ilen,
+ result, &dlen);
+ break;
+
+ case 1:
+ ret = crypto_scomp_decompress(crypto_scomp_cast(tfm),
+ dtemplate[i].input, ilen,
+ result, &dlen, ctx);
+ break;
+
+ case 2:
+ sg_init_one(&src, dtemplate[i].input, ilen);
+ sg_init_one(&dst, result, dlen);
+ acomp_request_set_comp(ctx, &src, &dst, ilen, dlen);
+ ret = crypto_acomp_decompress(ctx);
+ dlen = ((struct acomp_req *)ctx)->out_len;
+ break;
+
+ default:
+ ret = 1;
+ break;
+ }
+
if (ret) {
printk(KERN_ERR "alg: comp: decompression failed "
"on test %d for %s: ret=%d\n", i + 1, algo,
@@ -1446,7 +1498,8 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
return PTR_ERR(tfm);
}

- err = test_comp(tfm, desc->suite.comp.comp.vecs,
+ err = test_comp(crypto_comp_tfm(tfm), NULL, 0,
+ desc->suite.comp.comp.vecs,
desc->suite.comp.decomp.vecs,
desc->suite.comp.comp.count,
desc->suite.comp.decomp.count);
@@ -1455,6 +1508,92 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
return err;
}

+static int __alg_test_scomp(const struct alg_test_desc *desc,
+ const char *driver, u32 type, u32 mask)
+{
+ struct crypto_scomp *tfm;
+ void *ctx;
+ int err;
+
+ tfm = crypto_alloc_scomp(driver, type, mask);
+ if (IS_ERR(tfm)) {
+ printk(KERN_ERR "alg: scomp: Failed to load transform for %s: "
+ "%ld\n", driver, PTR_ERR(tfm));
+ return PTR_ERR(tfm);
+ }
+
+ ctx = crypto_scomp_alloc_ctx(tfm);
+ if (IS_ERR(ctx)) {
+ printk(KERN_ERR "alg: scomp: Failed to alloc context for %s: "
+ "%ld\n", driver, PTR_ERR(ctx));
+ err = PTR_ERR(ctx);
+ goto out;
+ }
+
+ err = test_comp(crypto_scomp_tfm(tfm), ctx, 1,
+ desc->suite.comp.comp.vecs,
+ desc->suite.comp.decomp.vecs,
+ desc->suite.comp.comp.count,
+ desc->suite.comp.decomp.count);
+
+ crypto_scomp_free_ctx(tfm, ctx);
+
+out:
+ crypto_free_scomp(tfm);
+ return err;
+}
+
+static int __alg_test_acomp(const struct alg_test_desc *desc,
+ const char *driver, u32 type, u32 mask)
+{
+ struct crypto_acomp *tfm;
+ struct acomp_req *req;
+ int err;
+
+ tfm = crypto_alloc_acomp(driver, type, mask);
+ if (IS_ERR(tfm)) {
+ printk(KERN_ERR "alg: acomp: Failed to load transform for %s: "
+ "%ld\n", driver, PTR_ERR(tfm));
+ return PTR_ERR(tfm);
+ }
+
+ req = acomp_request_alloc(tfm, GFP_KERNEL);
+ if (!req) {
+ printk(KERN_ERR "alg: acomp: Failed to alloc request for %s: ",
+ driver);
+ err = -ENOMEM;
+ goto out;
+ }
+
+ err = test_comp(crypto_acomp_tfm(tfm), req, 2,
+ desc->suite.comp.comp.vecs,
+ desc->suite.comp.decomp.vecs,
+ desc->suite.comp.comp.count,
+ desc->suite.comp.decomp.count);
+
+ acomp_request_free(req);
+
+out:
+ crypto_free_acomp(tfm);
+ return err;
+}
+
+static int alg_test_scomp(const struct alg_test_desc *desc, const char *driver,
+ u32 type, u32 mask)
+{
+ int err;
+
+ err = __alg_test_scomp(desc, driver, type, mask);
+ if (err)
+ return err;
+
+ err = __alg_test_acomp(desc, driver, type, mask);
+ if (err)
+ return err;
+
+ return err;
+}
+
static int alg_test_hash(const struct alg_test_desc *desc, const char *driver,
u32 type, u32 mask)
{
@@ -2520,7 +2659,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
- .alg = "deflate",
+ .alg = "deflate-generic",
.test = alg_test_comp,
.fips_allowed = 1,
.suite = {
@@ -2536,6 +2675,22 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
+ .alg = "deflate-scomp",
+ .test = alg_test_scomp,
+ .fips_allowed = 1,
+ .suite = {
+ .comp = {
+ .comp = {
+ .vecs = deflate_comp_tv_template,
+ .count = DEFLATE_COMP_TEST_VECTORS
+ },
+ .decomp = {
+ .vecs = deflate_decomp_tv_template,
+ .count = DEFLATE_DECOMP_TEST_VECTORS
+ }
+ }
+ }
+ }, {
.alg = "digest_null",
.test = alg_test_null,
}, {
@@ -3171,7 +3326,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
- .alg = "lz4",
+ .alg = "lz4-generic",
.test = alg_test_comp,
.fips_allowed = 1,
.suite = {
@@ -3187,7 +3342,23 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
- .alg = "lz4hc",
+ .alg = "lz4-scomp",
+ .test = alg_test_scomp,
+ .fips_allowed = 1,
+ .suite = {
+ .comp = {
+ .comp = {
+ .vecs = lz4_comp_tv_template,
+ .count = LZ4_COMP_TEST_VECTORS
+ },
+ .decomp = {
+ .vecs = lz4_decomp_tv_template,
+ .count = LZ4_DECOMP_TEST_VECTORS
+ }
+ }
+ }
+ }, {
+ .alg = "lz4hc-generic",
.test = alg_test_comp,
.fips_allowed = 1,
.suite = {
@@ -3203,7 +3374,23 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
- .alg = "lzo",
+ .alg = "lz4hc-scomp",
+ .test = alg_test_scomp,
+ .fips_allowed = 1,
+ .suite = {
+ .comp = {
+ .comp = {
+ .vecs = lz4hc_comp_tv_template,
+ .count = LZ4HC_COMP_TEST_VECTORS
+ },
+ .decomp = {
+ .vecs = lz4hc_decomp_tv_template,
+ .count = LZ4HC_DECOMP_TEST_VECTORS
+ }
+ }
+ }
+ }, {
+ .alg = "lzo-generic",
.test = alg_test_comp,
.fips_allowed = 1,
.suite = {
@@ -3219,6 +3406,22 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
+ .alg = "lzo-scomp",
+ .test = alg_test_scomp,
+ .fips_allowed = 1,
+ .suite = {
+ .comp = {
+ .comp = {
+ .vecs = lzo_comp_tv_template,
+ .count = LZO_COMP_TEST_VECTORS
+ },
+ .decomp = {
+ .vecs = lzo_decomp_tv_template,
+ .count = LZO_DECOMP_TEST_VECTORS
+ }
+ }
+ }
+ }, {
.alg = "md4",
.test = alg_test_hash,
.suite = {
--
1.9.1

2016-01-26 08:15:10

by Joonsoo Kim

[permalink] [raw]
Subject: [PATCH v2 08/10] crypto/842: support new compression APIs

Now that the new compression APIs are introduced and bring some
benefits, let's support them in this algorithm as well.

Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/842.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---
crypto/Kconfig | 1 +
2 files changed, 83 insertions(+), 3 deletions(-)

diff --git a/crypto/842.c b/crypto/842.c
index 98e387e..47cf7e5 100644
--- a/crypto/842.c
+++ b/crypto/842.c
@@ -32,10 +32,46 @@
#include <linux/crypto.h>
#include <linux/sw842.h>

+#include <crypto/compress.h>
+
struct crypto842_ctx {
- char wmem[SW842_MEM_COMPRESS]; /* working memory for compress */
+ void *wmem; /* working memory for compress */
};

+static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
+{
+ void *ctx;
+
+ ctx = kmalloc(SW842_MEM_COMPRESS, GFP_KERNEL);
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
+
+ return ctx;
+}
+
+static int crypto842_init(struct crypto_tfm *tfm)
+{
+ struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ ctx->wmem = crypto842_alloc_ctx(NULL);
+ if (IS_ERR(ctx->wmem))
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void crypto842_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+ kfree(ctx);
+}
+
+static void crypto842_exit(struct crypto_tfm *tfm)
+{
+ struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ crypto842_free_ctx(NULL, ctx->wmem);
+}
+
static int crypto842_compress(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen)
@@ -45,6 +81,13 @@ static int crypto842_compress(struct crypto_tfm *tfm,
return sw842_compress(src, slen, dst, dlen, ctx->wmem);
}

+static int crypto842_scompress(struct crypto_scomp *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return sw842_compress(src, slen, dst, dlen, ctx);
+}
+
static int crypto842_decompress(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen)
@@ -52,27 +95,63 @@ static int crypto842_decompress(struct crypto_tfm *tfm,
return sw842_decompress(src, slen, dst, dlen);
}

+static int crypto842_sdecompress(struct crypto_scomp *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return sw842_decompress(src, slen, dst, dlen);
+}
+
static struct crypto_alg alg = {
.cra_name = "842",
.cra_driver_name = "842-generic",
.cra_priority = 100,
.cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
- .cra_ctxsize = sizeof(struct crypto842_ctx),
.cra_module = THIS_MODULE,
+ .cra_init = crypto842_init,
+ .cra_exit = crypto842_exit,
.cra_u = { .compress = {
.coa_compress = crypto842_compress,
.coa_decompress = crypto842_decompress } }
};

+static struct scomp_alg scomp = {
+ .alloc_ctx = crypto842_alloc_ctx,
+ .free_ctx = crypto842_free_ctx,
+ .compress = crypto842_scompress,
+ .decompress = crypto842_sdecompress,
+ .base = {
+ .cra_name = "842",
+ .cra_driver_name= "842-scomp",
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_SCOMPRESS |
+ CRYPTO_SCOMP_DECOMP_NOCTX,
+ .cra_module = THIS_MODULE,
+ }
+};
+
static int __init crypto842_mod_init(void)
{
- return crypto_register_alg(&alg);
+ int ret;
+
+ ret = crypto_register_alg(&alg);
+ if (ret)
+ return ret;
+
+ ret = crypto_register_scomp(&scomp);
+ if (ret) {
+ crypto_unregister_alg(&alg);
+ return ret;
+ }
+
+ return ret;
}
module_init(crypto842_mod_init);

static void __exit crypto842_mod_exit(void)
{
crypto_unregister_alg(&alg);
+ crypto_unregister_scomp(&scomp);
}
module_exit(crypto842_mod_exit);

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 2641a60..351b859 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1512,6 +1512,7 @@ config CRYPTO_842
select CRYPTO_ALGAPI
select 842_COMPRESS
select 842_DECOMPRESS
+ select CRYPTO_COMPRESS2
help
This is the 842 algorithm.

--
1.9.1

2016-01-26 08:15:07

by Joonsoo Kim

[permalink] [raw]
Subject: [PATCH v2 05/10] crypto/lzo: support new compression APIs

Now that the new compression APIs are introduced and bring some
benefits, let's support them in this algorithm as well.

Signed-off-by: Joonsoo Kim <[email protected]>
---
crypto/Kconfig | 1 +
crypto/lzo.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++++++--------
2 files changed, 83 insertions(+), 13 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index f22f4e9..b4b485c 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1503,6 +1503,7 @@ config CRYPTO_LZO
select CRYPTO_ALGAPI
select LZO_COMPRESS
select LZO_DECOMPRESS
+ select CRYPTO_COMPRESS2
help
This is the LZO algorithm.

diff --git a/crypto/lzo.c b/crypto/lzo.c
index 4b3e925..94cd7a4 100644
--- a/crypto/lzo.c
+++ b/crypto/lzo.c
@@ -23,39 +23,56 @@
#include <linux/mm.h>
#include <linux/lzo.h>

+#include <crypto/compress.h>
+
struct lzo_ctx {
void *lzo_comp_mem;
};

+static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
+{
+ void *ctx;
+
+ ctx = kmalloc(LZO1X_MEM_COMPRESS,
+ GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+ if (!ctx)
+ ctx = vmalloc(LZO1X_MEM_COMPRESS);
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
+
+ return ctx;
+}
+
static int lzo_init(struct crypto_tfm *tfm)
{
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);

- ctx->lzo_comp_mem = kmalloc(LZO1X_MEM_COMPRESS,
- GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
- if (!ctx->lzo_comp_mem)
- ctx->lzo_comp_mem = vmalloc(LZO1X_MEM_COMPRESS);
- if (!ctx->lzo_comp_mem)
+ ctx->lzo_comp_mem = lzo_alloc_ctx(NULL);
+ if (IS_ERR(ctx->lzo_comp_mem))
return -ENOMEM;

return 0;
}

+static void lzo_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+ kvfree(ctx);
+}
+
static void lzo_exit(struct crypto_tfm *tfm)
{
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);

- kvfree(ctx->lzo_comp_mem);
+ lzo_free_ctx(NULL, ctx->lzo_comp_mem);
}

-static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lzo_compress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
{
- struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
int err;

- err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx->lzo_comp_mem);
+ err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx);

if (err != LZO_E_OK)
return -EINVAL;
@@ -64,8 +81,22 @@ static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
return 0;
}

-static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+ struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ return __lzo_compress(src, slen, dst, dlen, ctx->lzo_comp_mem);
+}
+
+static int lzo_scompress(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return __lzo_compress(src, slen, dst, dlen, ctx);
+}
+
+static int __lzo_decompress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
{
int err;
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
@@ -77,7 +108,18 @@ static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,

*dlen = tmp_len;
return 0;
+}
+
+static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+ return __lzo_decompress(src, slen, dst, dlen);
+}

+static int lzo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen, void *ctx)
+{
+ return __lzo_decompress(src, slen, dst, dlen);
}

static struct crypto_alg alg = {
@@ -92,14 +134,41 @@ static struct crypto_alg alg = {
.coa_decompress = lzo_decompress } }
};

+static struct scomp_alg scomp = {
+ .alloc_ctx = lzo_alloc_ctx,
+ .free_ctx = lzo_free_ctx,
+ .compress = lzo_scompress,
+ .decompress = lzo_sdecompress,
+ .base = {
+ .cra_name = "lzo",
+ .cra_driver_name= "lzo-scomp",
+ .cra_flags = CRYPTO_ALG_TYPE_SCOMPRESS |
+ CRYPTO_SCOMP_DECOMP_NOCTX,
+ .cra_module = THIS_MODULE,
+ }
+};
+
static int __init lzo_mod_init(void)
{
- return crypto_register_alg(&alg);
+ int ret;
+
+ ret = crypto_register_alg(&alg);
+ if (ret)
+ return ret;
+
+ ret = crypto_register_scomp(&scomp);
+ if (ret) {
+ crypto_unregister_alg(&alg);
+ return ret;
+ }
+
+ return ret;
}

static void __exit lzo_mod_fini(void)
{
crypto_unregister_alg(&alg);
+ crypto_unregister_scomp(&scomp);
}

module_init(lzo_mod_init);
--
1.9.1

2016-01-26 08:15:35

by Joonsoo Kim

[permalink] [raw]
Subject: [PATCH v2 02/10] crypto: add algorithm type specific flag, CRYPTO_ALG_PRIVATE

In a following patch, new synchronous compression APIs will be
introduced, and they need a flag to determine whether a context buffer
is needed for decompression. This could be implemented by a flag in the
algorithm's own structure definition, but because there is room among
the generic crypto_alg flags, this patch reuses one of them to reduce
complexity. It can potentially be used by other algorithm types as well.

Signed-off-by: Joonsoo Kim <[email protected]>
---
include/linux/crypto.h | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index ab2a745..96530a1 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -101,6 +101,11 @@
#define CRYPTO_ALG_INTERNAL 0x00002000

/*
+ * Use this flag as algorithm type specific one.
+ */
+#define CRYPTO_ALG_PRIVATE 0x00004000
+
+/*
* Transform masks and values (for crt_flags).
*/
#define CRYPTO_TFM_REQ_MASK 0x000fff00
--
1.9.1

2016-01-27 07:41:59

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Tue, Jan 26, 2016 at 05:15:06PM +0900, Joonsoo Kim wrote:
> From: Weigang Li <[email protected]>
>
> Now, asynchronous compression APIs are supported. There is no asynchronous
> compression driver yet, but these APIs can be used as a front-end to
> a synchronous compression algorithm. In this case, the scatterlist would be
> linearised when needed, so it would cause some overhead.
>
> Signed-off-by: Weigang Li <[email protected]>
> Signed-off-by: Joonsoo Kim <[email protected]>

I think we should be able to use this for the synchronous case
too, like we do with skcipher and ahash.

The main difference that I can see right now is that acomp always
allocates a context through the request object while scomp does not.

This difference is entirely artificial as we could also make the
context conditional for acomp.

The reason we had the shash/ahash division is because the shash
interface offers a direct pointer interface while ahash is SG-based.
Otherwise ahash is just as able as shash to handle synchronous
requests.

At this point in time I don't see such a fundamental distinction
between acomp and scomp.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-01-27 07:59:08

by Li, Weigang

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On 1/27/2016 3:41 PM, Herbert Xu wrote:
> On Tue, Jan 26, 2016 at 05:15:06PM +0900, Joonsoo Kim wrote:
>> From: Weigang Li <[email protected]>
>>
>> Now, asynchronous compression APIs are supported. There is no asynchronous
>> compression driver yet, but these APIs can be used as a front-end to
>> a synchronous compression algorithm. In this case, the scatterlist would be
>> linearised when needed, so it would cause some overhead.
>>
>> Signed-off-by: Weigang Li <[email protected]>
>> Signed-off-by: Joonsoo Kim <[email protected]>
>
> I think we should be able to use this for the synchronous case
> too, like we do with skcipher and ahash.
>
> The main difference that I can see right now is that acomp always
> allocates a context through the request object while scomp does not.
>
> This difference is entirely artificial as we could also make the
> context conditional for acomp.
>
> The reason we had the shash/ahash division is because the shash
> interface offers a direct pointer interface while ahash is SG-based.
> Otherwise ahash is just as able as shash to handle synchronous
> requests.
>
> At this point in time I don't see such a fundamental distinction
> between acomp and scomp.
>
> Cheers,
>
The acomp is also SG-based, while scomp only accepts flat buffer.

2016-01-27 08:04:14

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Wed, Jan 27, 2016 at 03:59:05PM +0800, Li, Weigang wrote:
>
> The acomp is also SG-based, while scomp only accepts flat buffer.

Right, but do we need a pointer-based scomp at all? IPComp would
certainly be better off with an SG-based interface. Any other
users of compression are presumably dealing with large amounts
of data where an SG interface would make more sense.

A pointer interface makes sense for shash because you may be hashing
16 bytes at a time. Nobody sane is going to be compressing 16 bytes,
or are they?

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-01-27 08:09:38

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Wed, Jan 27, 2016 at 04:03:55PM +0800, Herbert Xu wrote:
> On Wed, Jan 27, 2016 at 03:59:05PM +0800, Li, Weigang wrote:
> >
> > The acomp is also SG-based, while scomp only accepts flat buffer.
>
> Right, but do we need a pointer-based scomp at all? IPComp would
> certainly be better off with an SG-based interface. Any other
> users of compression are presumably dealing with large amounts
> of data where an SG interface would make more sense.
>
> A pointer interface makes sense for shash because you may be hashing
> 16 bytes at a time. Nobody sane is going to be compressing 16 bytes,
> or are they?

Note that I'm fine with keeping an scomp interface underneath
for those algorithms where the best way to handle SG input is
to linearise things. But I would prefer that this interface is
not exposed to kernel users unless it is absolutely required.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-01-27 08:26:21

by Li, Weigang

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On 1/27/2016 4:09 PM, Herbert Xu wrote:
> On Wed, Jan 27, 2016 at 04:03:55PM +0800, Herbert Xu wrote:
>> On Wed, Jan 27, 2016 at 03:59:05PM +0800, Li, Weigang wrote:
>>>
>>> The acomp is also SG-based, while scomp only accepts flat buffer.
>>
>> Right, but do we need a pointer-based scomp at all? IPComp would
>> certainly be better off with an SG-based interface. Any other
>> users of compression are presumably dealing with large amounts
>> of data where an SG interface would make more sense.
>>
>> A pointer interface makes sense for shash because you may be hashing
>> 16 bytes at a time. Nobody sane is going to be compressing 16 bytes,
>> or are they?
>
> Note that I'm fine with keeping an scomp interface underneath
> for those algorithms where the best way to handle SG input is
> to linearise things. But I would prefer that this interface is
> not exposed to kernel users unless it is absolutely required.
>
> Cheers,
>
Thanks for your comments, Herbert. I agree, an SG-list based compression
API makes more sense. Maybe Joonsoo can comment on this.

2016-01-27 14:17:18

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 01/10] crypto/compress: remove unused pcomp interface

On Tue, Jan 26, 2016 at 05:15:03PM +0900, Joonsoo Kim wrote:
> It is unused now, so remove it.
>
> Signed-off-by: Joonsoo Kim <[email protected]>

Applied.
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-01-28 03:50:47

by Joonsoo Kim

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

Hello, Herbert.

On Wed, Jan 27, 2016 at 04:09:26PM +0800, Herbert Xu wrote:
> On Wed, Jan 27, 2016 at 04:03:55PM +0800, Herbert Xu wrote:
> > On Wed, Jan 27, 2016 at 03:59:05PM +0800, Li, Weigang wrote:
> > >
> > > The acomp is also SG-based, while scomp only accepts flat buffer.
> >
> > Right, but do we need a pointer-based scomp at all? IPComp would
> > certainly be better off with an SG-based interface. Any other
> > users of compression are presumably dealing with large amounts
> > of data where an SG interface would make more sense.
> >
> > A pointer interface makes sense for shash because you may be hashing
> > 16 bytes at a time. Nobody sane is going to be compressing 16 bytes,
> > or are they?

Hmm... I'm not an expert on this area, so my analysis below could be
wrong.

Some compression users in the kernel compress PAGE_SIZE chunks, and the
compressed size is naturally less than PAGE_SIZE. In many cases the
compressed size is below 100 bytes. To keep and handle that data, they
sometimes use a kmalloced buffer, and I guess that isn't well suited to
an SG-based interface. Is it okay to use an SG-based interface if the
kmalloced object spans two pages?

And some users even use a vmalloced buffer, which is also not suitable
for an SG-based interface. For the large-data case, a vmalloced buffer
is more suitable, and it needs a pointer interface.
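
For example (just a sketch; the helper and its arguments are only
illustrative, not part of the proposed API), presenting a vmalloced
buffer to an SG-based interface needs one scatterlist entry per page:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/* Sketch only: describe a page-aligned vmalloced buffer page by page. */
static void vbuf_to_sg(struct scatterlist *sgl, void *vbuf, unsigned int len)
{
	unsigned int nents = DIV_ROUND_UP(len, PAGE_SIZE);
	unsigned int i;

	sg_init_table(sgl, nents);
	for (i = 0; i < nents; i++) {
		unsigned int chunk = min_t(unsigned int, len, PAGE_SIZE);

		sg_set_page(&sgl[i], vmalloc_to_page(vbuf), chunk, 0);
		vbuf += PAGE_SIZE;
		len -= chunk;
	}
}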

> Note that I'm fine with keeping an scomp interface underneath
> for those algorithms where the best way to handle SG input is
> to linearise things. But I would prefer that this interface is
> not exposed to kernel users unless it is absolutely required.

I have tested the asynchronous compression APIs in zram and I saw a
regression. Atomic allocation and setting up SG lists are the culprits
for this regression. Moreover, zram optimizes linearisation
to get the best performance, so it has two Kconfig options. One of
them cannot be supported in the general layer. Not supporting
pointer-based APIs unavoidably causes a regression for zram in this case.

Also, the S/W compression algorithms that exist in the kernel are
pointer based, so it's natural to support that first in crypto
compression. That will help existing users switch their direct library
calls to crypto compression without any regression. They may be happy
to make the change because they would simply get support for more
algorithms without any loss.

I think that supporting a pointer-based interface has the merits
mentioned above. However, I'm not sure what the benefit of supporting
only an SG-based interface would be, or whether it outweighs the above.

Thanks.

2016-01-29 10:09:31

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Thu, Jan 28, 2016 at 12:19:42PM +0900, Joonsoo Kim wrote:
>
> I have tested the asynchronous compression APIs in zram and I saw a
> regression. Atomic allocation and setting up SG lists are the culprits
> for this regression. Moreover, zram optimizes linearisation

So which is it, atomic allocations or setting up SG lists? There
is nothing in acomp that requires you to do an atomic allocation.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-02-01 02:11:24

by Joonsoo Kim

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Fri, Jan 29, 2016 at 06:09:01PM +0800, Herbert Xu wrote:
> On Thu, Jan 28, 2016 at 12:19:42PM +0900, Joonsoo Kim wrote:
> >
> > I have tested the asynchronous compression APIs in zram and I saw a
> > regression. Atomic allocation and setting up SG lists are the culprits
> > for this regression. Moreover, zram optimizes linearisation
>
> So which is it, atomic allocations or setting up SG lists? There
> is nothing in acomp that requires you to do an atomic allocation.

Atomic allocations are done for linearisation when needed. Zram's
compressed content is usually stored in two physically separate pages,
so linearisation is needed. See scomp_map().

Setting up SG lists means that, to use acomp, sg_init_table() and
sg_set_page() need to be called by zram, unlike the case of just
passing a pointer-based buffer.
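
As a rough sketch (the helper below is only illustrative, not from the
patchset), for a compressed object spanning two physical pages zram
would have to do something like this per request, instead of just
passing a pointer:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Sketch only: describe an object split across two physical pages. */
static void example_fill_sg(struct scatterlist sg[2],
			    struct page *first, struct page *second,
			    unsigned int offset, unsigned int len)
{
	unsigned int in_first = min_t(unsigned int, len, PAGE_SIZE - offset);

	sg_init_table(sg, 2);
	sg_set_page(&sg[0], first, in_first, offset);
	sg_set_page(&sg[1], second, len - in_first, 0);
	/* the pair is then attached to an acomp_req via acomp_request_set_comp() */
}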

Thanks.

2016-02-04 03:25:31

by Li, Weigang

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On 2/1/2016 10:11 AM, Joonsoo Kim wrote:
> On Fri, Jan 29, 2016 at 06:09:01PM +0800, Herbert Xu wrote:
>> On Thu, Jan 28, 2016 at 12:19:42PM +0900, Joonsoo Kim wrote:
>>>
>>> I have tested the asynchronous compression APIs in zram and I saw a
>>> regression. Atomic allocation and setting up SG lists are the culprits
>>> for this regression. Moreover, zram optimizes linearisation
>>
>> So which is it, atomic allocations or setting up SG lists? There
>> is nothing in acomp that requires you to do an atomic allocation.
>
> Atomic allocations are done for linearisation when needed. Zram's
> compressed content is usually stored in two physically separate pages,
> so linearisation is needed. See scomp_map().
>
> Setting up SG lists means that, to use acomp, sg_init_table() and
> sg_set_page() need to be called by zram, unlike the case of just
> passing a pointer-based buffer.
>
> Thanks.
> --
>
Hello Herbert & Joonsoo,
Please can you advise how to get the acomp patch accepted?

2016-02-04 03:29:06

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Thu, Feb 04, 2016 at 11:25:27AM +0800, Li, Weigang wrote:
>
> Please can you advise how to get the acomp patch accepted?

Can you do a posting of these patches without scomp so we can
evaluate the effects?

Thanks!
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-02-04 03:29:57

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Thu, Feb 04, 2016 at 11:28:50AM +0800, Herbert Xu wrote:
> On Thu, Feb 04, 2016 at 11:25:27AM +0800, Li, Weigang wrote:
> >
> > Please can you advise how to get the acomp patch accepted?
>
> Can you do a posting of these patches without scomp so we can
> evaluate the effects?

Of course you can keep the driver-side scomp interface as otherwise
the implementation would be unnecessarily complicated.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-02-04 03:50:49

by Li, Weigang

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On 2/4/2016 11:29 AM, Herbert Xu wrote:
> On Thu, Feb 04, 2016 at 11:28:50AM +0800, Herbert Xu wrote:
>> On Thu, Feb 04, 2016 at 11:25:27AM +0800, Li, Weigang wrote:
>>>
>>> Please can you advise how to get the acomp patch accepted?
>>
>> Can you do a posting of these patches without scomp so we can
>> evaluate the effects?
>
> Of course you can keep the driver-side scomp interface as otherwise
> the implementation would be unnecessarily complicated.
>
> Cheers,
>
Seems I need to go back to my first acomp patch. Assuming we shall still
keep the comp i/f, and the linearisation of the sg-list in acomp to fit the
"comp" API? What do you mean by the driver-side scomp? Thanks!

2016-02-04 07:17:42

by Joonsoo Kim

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

2016-02-04 12:28 GMT+09:00 Herbert Xu <[email protected]>:
> On Thu, Feb 04, 2016 at 11:25:27AM +0800, Li, Weigang wrote:
>>
>> Please can you advise how to get the acomp patch accepted?
>
> Can you do a posting of these patches without scomp so we can
> evaluate the effects?
>

Are you thinking of not merging scomp? Please let me know your overall
plan for this.

Thanks.

2016-02-04 14:54:44

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Thu, Feb 04, 2016 at 04:17:41PM +0900, Joonsoo Kim wrote:
>
> Are you thinking of not merging scomp? Please let me know your overall
> plan for this.

I'm fine with a driver-side scomp interface. But I'd rather
avoid having yet another user-side compression interface in the
form of scomp if we can avoid it.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-02-04 14:56:44

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

On Thu, Feb 04, 2016 at 11:50:46AM +0800, Li, Weigang wrote:
>
> Seems I need to go back to my first acomp patch. Assuming we shall
> still keep the comp i/f, and the linearisation of the sg-list in acomp
> to fit the "comp" API? What do you mean by the driver-side scomp?

What I mean is that the bottom half of the scomp patches can still
be used. We just want to hide it away from the users so won't
provide the direct entry points such as crypto_alloc_scomp, etc..

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-02-04 16:19:30

by Joonsoo Kim

[permalink] [raw]
Subject: Re: [PATCH v2 04/10] crypto/compress: add asynchronous compression support

2016-02-04 23:53 GMT+09:00 Herbert Xu <[email protected]>:
> On Thu, Feb 04, 2016 at 04:17:41PM +0900, Joonsoo Kim wrote:
>>
>> Are you thinking of not merging scomp? Please let me know your overall
>> plan for this.
>
> I'm fine with a driver-side scomp interface. But I'd rather
> avoid having yet another user-side compression interface in the
> form of scomp if we can avoid it.

I mentioned that there are use cases where scomp is needed for performance,
which means we can't avoid it. Or do you see those use cases differently?
I understand it's rather a pain to have two interfaces, but the scomp
interface is just a wrapping layer to handle the various S/W compression
algorithms, and the implementation is not that complex. I guess it doesn't
cause much maintenance cost.

Thanks.