2016-03-06 01:25:15

by Tadeusz Struk

Subject: [PATCH 0/3] crypto: af_alg - add TLS type encryption

Hi,
The following series adds TLS type authentication. To do this a new
template, encauth, is introduced. It is derived from the existing authenc
template and modified to work in "first auth then encrypt" mode.
The algif interface is also changed to work with the new authentication type.
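
For context, a rough sketch of the intended user-space flow (the
algorithm string, key blob layout and error handling below are
illustrative assumptions, not part of the patches):

#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

static int tls_encauth_socket(const void *keyblob, int keylen)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type = "aead",
		/* encauth template from patch 1: first auth, then encrypt */
		.salg_name = "encauth(hmac(sha1),cbc(aes))",
	};
	int tfmfd, opfd;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	if (tfmfd < 0)
		return -1;
	/* keyblob uses the usual authenc format (auth key + enc key) */
	if (bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
	    setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, keyblob, keylen) < 0) {
		close(tfmfd);
		return -1;
	}
	opfd = accept(tfmfd, NULL, 0);
	close(tfmfd);
	return opfd;
}

Each record is then processed with one sendmsg()/read() round trip on
opfd, with the mode selected via the new ALG_SET_AEAD_TYPE control
message from patch 2.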

---

Tadeusz Struk (3):
crypto: authenc - add TLS type encryption
crypto: af_alg - add AEAD operation type
crypto: algif_aead - modify algif aead interface to work with encauth


crypto/Makefile | 2
crypto/af_alg.c | 6 +
crypto/algif_aead.c | 93 +++++++-
crypto/encauth.c | 510 +++++++++++++++++++++++++++++++++++++++++++
include/crypto/if_alg.h | 1
include/uapi/linux/if_alg.h | 4
6 files changed, 601 insertions(+), 15 deletions(-)
create mode 100644 crypto/encauth.c

--


2016-03-06 01:25:22

by Tadeusz Struk

Subject: [PATCH 1/3] crypto: authenc - add TLS type encryption

This patch adds a new authentication mode for TLS type encryption.
During encrypt it generates the auth data and padding, and then encrypts
plaintext || authdata || padding.
This requires the user to provide extra space for the ciphertext.
The required space can be calculated as
outlen = assoc len + plaintext len + hash size + cipher block size
On decrypt the whole buffer is decrypted first, and then the
authdata and padding are verified.
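
As a minimal sketch of the length arithmetic (the helper names are
illustrative, but the formulas match what encauth.c computes):

/* TLS-style padding: always between 1 and bs bytes */
static unsigned int tls_padlen(unsigned int cryptlen, unsigned int as,
			       unsigned int bs)
{
	return bs - ((cryptlen + as) % bs);
}

/* worst case matches the formula above: padding can take a full block */
static unsigned int tls_outlen(unsigned int assoclen, unsigned int cryptlen,
			       unsigned int as, unsigned int bs)
{
	return assoclen + cryptlen + as + tls_padlen(cryptlen, as, bs);
}

For example, cbc(aes) with hmac(sha1) (bs = 16, as = 20) and a 100-byte
plaintext gives 8 bytes of padding, i.e. 128 bytes of ciphertext plus
the associated data.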

Signed-off-by: Tadeusz Struk <[email protected]>
---
crypto/Makefile | 2
crypto/encauth.c | 510 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 511 insertions(+), 1 deletion(-)
create mode 100644 crypto/encauth.c

diff --git a/crypto/Makefile b/crypto/Makefile
index 4f4ef7e..a372335 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -103,7 +103,7 @@ obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o
obj-$(CONFIG_CRYPTO_CRC32C) += crc32c_generic.o
obj-$(CONFIG_CRYPTO_CRC32) += crc32_generic.o
obj-$(CONFIG_CRYPTO_CRCT10DIF) += crct10dif_common.o crct10dif_generic.o
-obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o authencesn.o
+obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o authencesn.o encauth.o
obj-$(CONFIG_CRYPTO_LZO) += lzo.o
obj-$(CONFIG_CRYPTO_LZ4) += lz4.o
obj-$(CONFIG_CRYPTO_LZ4HC) += lz4hc.o
diff --git a/crypto/encauth.c b/crypto/encauth.c
new file mode 100644
index 0000000..3c0ee1a
--- /dev/null
+++ b/crypto/encauth.c
@@ -0,0 +1,510 @@
+/*
+ * Encauth: Simple AEAD wrapper for TLS.
+ * Derived from authenc.c
+ *
+ * Copyright (c) 2016 Intel Corp.
+ *
+ * Author: Tadeusz Struk <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <crypto/internal/aead.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/authenc.h>
+#include <crypto/null.h>
+#include <crypto/scatterwalk.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/rtnetlink.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+struct encauth_instance_ctx {
+ struct crypto_ahash_spawn auth;
+ struct crypto_skcipher_spawn enc;
+ unsigned int reqoff;
+};
+
+struct crypto_encauth_ctx {
+ struct crypto_ahash *auth;
+ struct crypto_ablkcipher *enc;
+ struct crypto_blkcipher *null;
+};
+
+struct encauth_request_ctx {
+ struct scatterlist src[2];
+ struct scatterlist dst[2];
+ int padd_err;
+ u8 paddlen;
+ char tail[];
+};
+
+static void encauth_request_complete(struct aead_request *req, int err)
+{
+ if (err != -EINPROGRESS)
+ aead_request_complete(req, err);
+}
+
+static int crypto_encauth_setkey(struct crypto_aead *encauth, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(encauth);
+ struct crypto_ahash *auth = ctx->auth;
+ struct crypto_ablkcipher *enc = ctx->enc;
+ struct crypto_authenc_keys keys;
+ int err = -EINVAL;
+
+ if (crypto_authenc_extractkeys(&keys, key, keylen) != 0)
+ goto badkey;
+
+ crypto_ahash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
+ crypto_ahash_set_flags(auth, crypto_aead_get_flags(encauth) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_ahash_setkey(auth, keys.authkey, keys.authkeylen);
+ crypto_aead_set_flags(encauth, crypto_ahash_get_flags(auth) &
+ CRYPTO_TFM_RES_MASK);
+
+ if (err)
+ goto out;
+
+ crypto_ablkcipher_clear_flags(enc, CRYPTO_TFM_REQ_MASK);
+ crypto_ablkcipher_set_flags(enc, crypto_aead_get_flags(encauth) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_ablkcipher_setkey(enc, keys.enckey, keys.enckeylen);
+ crypto_aead_set_flags(encauth, crypto_ablkcipher_get_flags(enc) &
+ CRYPTO_TFM_RES_MASK);
+
+out:
+ return err;
+
+badkey:
+ crypto_aead_set_flags(encauth, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ goto out;
+}
+
+static int crypto_encauth_copy_assoc(struct aead_request *req)
+{
+ struct crypto_aead *encauth = crypto_aead_reqtfm(req);
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(encauth);
+ struct blkcipher_desc desc = {
+ .tfm = ctx->null,
+ };
+
+ return crypto_blkcipher_encrypt(&desc, req->dst, req->src,
+ req->assoclen);
+}
+
+static void encauth_encrypt_done(struct crypto_async_request *req, int err)
+{
+ struct aead_request *areq = req->data;
+
+ encauth_request_complete(areq, err);
+}
+
+static int crypto_encauth_encrypt(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
+ struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct encauth_request_ctx *areq_ctx = aead_request_ctx(req);
+ struct crypto_ablkcipher *enc = ctx->enc;
+ struct ablkcipher_request *abreq = (void *)(areq_ctx->tail +
+ ictx->reqoff);
+ struct scatterlist *src, *dst;
+ int err;
+
+ sg_init_table(areq_ctx->src, 2);
+ src = scatterwalk_ffwd(areq_ctx->src, req->src, req->assoclen);
+ dst = src;
+
+ if (req->src != req->dst) {
+ err = crypto_encauth_copy_assoc(req);
+ if (err)
+ return err;
+
+ sg_init_table(areq_ctx->dst, 2);
+ dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
+ }
+ ablkcipher_request_set_tfm(abreq, enc);
+ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
+ encauth_encrypt_done, req);
+ ablkcipher_request_set_crypt(abreq, src, dst, req->cryptlen +
+ crypto_aead_authsize(tfm) +
+ areq_ctx->paddlen, req->iv);
+ return crypto_ablkcipher_encrypt(abreq);
+}
+
+static void encauth_geniv_ahash_done(struct crypto_async_request *areq, int err)
+{
+ struct aead_request *req = areq->data;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct encauth_request_ctx *areq_ctx = aead_request_ctx(req);
+ struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
+
+ if (err)
+ goto out;
+
+ scatterwalk_map_and_copy(ahreq->result, req->dst,
+ req->assoclen + req->cryptlen,
+ crypto_aead_authsize(tfm), 1);
+ err = crypto_encauth_encrypt(req);
+out:
+ encauth_request_complete(req, err);
+}
+
+static int crypto_encauth_genicv_encrypt(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
+ struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct crypto_ahash *auth = ctx->auth;
+ struct encauth_request_ctx *areq_ctx = aead_request_ctx(req);
+ struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
+ u8 paddlen, *hash = areq_ctx->tail;
+ const unsigned int bs = crypto_aead_blocksize(tfm);
+ unsigned int as = crypto_aead_authsize(tfm);
+ u8 padd[bs];
+ int err;
+
+ hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
+ crypto_ahash_alignmask(auth) + 1);
+
+ /* apply padding */
+ paddlen = bs - ((req->cryptlen + as) % bs);
+ memset(padd, paddlen - 1, paddlen);
+ if (sg_copy_buffer(req->src, sg_nents(req->src), padd, paddlen,
+ req->cryptlen + req->assoclen + as, 0) != paddlen)
+ return -EINVAL;
+
+ areq_ctx->paddlen = paddlen;
+ ahash_request_set_tfm(ahreq, auth);
+ ahash_request_set_crypt(ahreq, req->src, hash,
+ req->assoclen + req->cryptlen);
+ ahash_request_set_callback(ahreq, aead_request_flags(req),
+ encauth_geniv_ahash_done, req);
+ err = crypto_ahash_digest(ahreq);
+ if (err)
+ return err;
+
+ scatterwalk_map_and_copy(hash, req->src, req->assoclen + req->cryptlen,
+ crypto_aead_authsize(tfm), 1);
+ return crypto_encauth_encrypt(req);
+}
+
+static void encauth_dgst_verify_done(struct crypto_async_request *req, int err)
+{
+ struct aead_request *areq = req->data;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
+ unsigned int authsize = crypto_aead_authsize(tfm);
+ struct ahash_request *ahreq = (void *)req;
+ struct encauth_request_ctx *areq_ctx = aead_request_ctx(areq);
+ u8 *ihash = ahreq->result + authsize;
+
+ scatterwalk_map_and_copy(ihash, areq->dst, ahreq->nbytes, authsize, 0);
+
+ if (crypto_memneq(ihash, ahreq->result, authsize) || areq_ctx->padd_err)
+ err = -EBADMSG;
+
+ encauth_request_complete(areq, err);
+}
+
+static int crypto_encauth_dgst_verify(struct aead_request *req,
+ unsigned int flags)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ unsigned int authsize = crypto_aead_authsize(tfm);
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
+ struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct crypto_ahash *auth = ctx->auth;
+ struct encauth_request_ctx *areq_ctx = aead_request_ctx(req);
+ struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
+ u8 *hash = areq_ctx->tail;
+ int i, err = 0, padd_err = 0;
+ u8 paddlen, *ihash;
+ u8 padd[255];
+
+ scatterwalk_map_and_copy(&paddlen, req->dst, req->assoclen +
+ req->cryptlen - 1, 1, 0);
+
+ if (paddlen > 255 || paddlen > req->cryptlen) {
+ paddlen = 1;
+ padd_err = -EBADMSG;
+ }
+
+ scatterwalk_map_and_copy(padd, req->dst, req->assoclen +
+ req->cryptlen - paddlen, paddlen, 0);
+
+ for (i = 0; i < paddlen; i++) {
+ if (padd[i] != paddlen)
+ padd_err = -EBADMSG;
+ }
+
+ areq_ctx->padd_err = padd_err;
+
+ hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
+ crypto_ahash_alignmask(auth) + 1);
+
+ ahash_request_set_tfm(ahreq, auth);
+ ahash_request_set_crypt(ahreq, req->dst, hash,
+ req->assoclen + req->cryptlen -
+ authsize - paddlen - 1);
+ ahash_request_set_callback(ahreq, aead_request_flags(req),
+ encauth_dgst_verify_done, req);
+ err = crypto_ahash_digest(ahreq);
+ if (err)
+ return err;
+
+ ihash = ahreq->result + authsize;
+ scatterwalk_map_and_copy(ihash, req->dst, ahreq->nbytes, authsize, 0);
+ if (crypto_memneq(ihash, ahreq->result, authsize) || padd_err)
+ err = -EBADMSG;
+
+ return err;
+}
+
+static void encauth_decrypt_done(struct crypto_async_request *areq, int err)
+{
+ struct aead_request *req = areq->data;
+
+ if (err)
+ goto out;
+
+ err = crypto_encauth_dgst_verify(req, aead_request_flags(req));
+out:
+ encauth_request_complete(req, err);
+}
+
+static int crypto_encauth_decrypt(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
+ struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct encauth_request_ctx *areq_ctx = aead_request_ctx(req);
+ struct ablkcipher_request *abreq = (void *)(areq_ctx->tail +
+ ictx->reqoff);
+ struct scatterlist *src, *dst;
+ int err;
+
+ sg_init_table(areq_ctx->src, 2);
+ src = scatterwalk_ffwd(areq_ctx->src, req->src, req->assoclen);
+ dst = src;
+
+ if (req->src != req->dst) {
+ err = crypto_encauth_copy_assoc(req);
+ if (err)
+ return err;
+
+ sg_init_table(areq_ctx->dst, 2);
+ dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
+ }
+ ablkcipher_request_set_tfm(abreq, ctx->enc);
+ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
+ encauth_decrypt_done, req);
+ ablkcipher_request_set_crypt(abreq, src, dst, req->cryptlen, req->iv);
+ err = crypto_ablkcipher_decrypt(abreq);
+ if (err)
+ return err;
+
+ return crypto_encauth_dgst_verify(req, aead_request_flags(req));
+}
+
+static int crypto_encauth_init_tfm(struct crypto_aead *tfm)
+{
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
+ struct crypto_ahash *auth;
+ struct crypto_ablkcipher *enc;
+ struct crypto_blkcipher *null;
+ int err;
+
+ auth = crypto_spawn_ahash(&ictx->auth);
+ if (IS_ERR(auth))
+ return PTR_ERR(auth);
+
+ enc = crypto_spawn_skcipher(&ictx->enc);
+ err = PTR_ERR(enc);
+ if (IS_ERR(enc))
+ goto err_free_ahash;
+
+ null = crypto_get_default_null_skcipher();
+ err = PTR_ERR(null);
+ if (IS_ERR(null))
+ goto err_free_skcipher;
+
+ ctx->auth = auth;
+ ctx->enc = enc;
+ ctx->null = null;
+
+ crypto_aead_set_reqsize(tfm, sizeof(struct encauth_request_ctx) +
+ ictx->reqoff +
+ max_t(unsigned int, crypto_ahash_reqsize(auth) +
+ sizeof(struct ahash_request),
+ sizeof(struct ablkcipher_request) +
+ crypto_ablkcipher_reqsize(enc)));
+ return 0;
+
+err_free_skcipher:
+ crypto_free_ablkcipher(enc);
+err_free_ahash:
+ crypto_free_ahash(auth);
+ return err;
+}
+
+static void crypto_encauth_exit_tfm(struct crypto_aead *tfm)
+{
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
+
+ crypto_free_ahash(ctx->auth);
+ crypto_free_ablkcipher(ctx->enc);
+ crypto_put_default_null_skcipher();
+}
+
+static void crypto_encauth_free(struct aead_instance *inst)
+{
+ struct encauth_instance_ctx *ctx = aead_instance_ctx(inst);
+
+ crypto_drop_skcipher(&ctx->enc);
+ crypto_drop_ahash(&ctx->auth);
+ kfree(inst);
+}
+
+static int crypto_encauth_create(struct crypto_template *tmpl,
+ struct rtattr **tb)
+{
+ struct crypto_attr_type *algt;
+ struct aead_instance *inst;
+ struct hash_alg_common *auth;
+ struct crypto_alg *auth_base;
+ struct crypto_alg *enc;
+ struct encauth_instance_ctx *ctx;
+ const char *enc_name;
+ int err;
+
+ algt = crypto_get_attr_type(tb);
+ if (IS_ERR(algt))
+ return PTR_ERR(algt);
+
+ if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
+ return -EINVAL;
+
+ auth = ahash_attr_alg(tb[1], CRYPTO_ALG_TYPE_HASH,
+ CRYPTO_ALG_TYPE_AHASH_MASK);
+ if (IS_ERR(auth))
+ return PTR_ERR(auth);
+
+ auth_base = &auth->base;
+
+ enc_name = crypto_attr_alg_name(tb[2]);
+ err = PTR_ERR(enc_name);
+ if (IS_ERR(enc_name))
+ goto out_put_auth;
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+ err = -ENOMEM;
+ if (!inst)
+ goto out_put_auth;
+
+ ctx = aead_instance_ctx(inst);
+
+ err = crypto_init_ahash_spawn(&ctx->auth, auth,
+ aead_crypto_instance(inst));
+ if (err)
+ goto err_free_inst;
+
+ crypto_set_skcipher_spawn(&ctx->enc, aead_crypto_instance(inst));
+ err = crypto_grab_skcipher(&ctx->enc, enc_name, 0,
+ crypto_requires_sync(algt->type,
+ algt->mask));
+ if (err)
+ goto err_drop_auth;
+
+ enc = crypto_skcipher_spawn_alg(&ctx->enc);
+
+ ctx->reqoff = ALIGN(2 * auth->digestsize + auth_base->cra_alignmask,
+ auth_base->cra_alignmask + 1);
+
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+ "encauth(%s,%s)", auth_base->cra_name, enc->cra_name) >=
+ CRYPTO_MAX_ALG_NAME)
+ goto err_drop_enc;
+
+ if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "encauth(%s,%s)", auth_base->cra_driver_name,
+ enc->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ goto err_drop_enc;
+
+ inst->alg.base.cra_flags = enc->cra_flags & CRYPTO_ALG_ASYNC;
+ inst->alg.base.cra_priority = enc->cra_priority * 10 +
+ auth_base->cra_priority;
+ inst->alg.base.cra_blocksize = enc->cra_blocksize;
+ inst->alg.base.cra_alignmask = auth_base->cra_alignmask |
+ enc->cra_alignmask;
+ inst->alg.base.cra_ctxsize = sizeof(struct crypto_encauth_ctx);
+
+ inst->alg.ivsize = enc->cra_ablkcipher.ivsize;
+ inst->alg.maxauthsize = auth->digestsize;
+
+ inst->alg.init = crypto_encauth_init_tfm;
+ inst->alg.exit = crypto_encauth_exit_tfm;
+
+ inst->alg.setkey = crypto_encauth_setkey;
+ inst->alg.encrypt = crypto_encauth_genicv_encrypt;
+ inst->alg.decrypt = crypto_encauth_decrypt;
+
+ inst->free = crypto_encauth_free;
+
+ err = aead_register_instance(tmpl, inst);
+ if (err)
+ goto err_drop_enc;
+
+out:
+ crypto_mod_put(auth_base);
+ return err;
+
+err_drop_enc:
+ crypto_drop_skcipher(&ctx->enc);
+err_drop_auth:
+ crypto_drop_ahash(&ctx->auth);
+err_free_inst:
+ kfree(inst);
+out_put_auth:
+ goto out;
+}
+
+static struct crypto_template crypto_encauth_tmpl = {
+ .name = "encauth",
+ .create = crypto_encauth_create,
+ .module = THIS_MODULE,
+};
+
+static int __init crypto_encauth_module_init(void)
+{
+ return crypto_register_template(&crypto_encauth_tmpl);
+}
+
+static void __exit crypto_encauth_module_exit(void)
+{
+ crypto_unregister_template(&crypto_encauth_tmpl);
+}
+
+module_init(crypto_encauth_module_init);
+module_exit(crypto_encauth_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Simple AEAD wrapper for TLS");
+MODULE_ALIAS_CRYPTO("encauth");

2016-03-06 01:25:27

by Tadeusz Struk

Subject: [PATCH 2/3] crypto: af_alg - add AEAD operation type

We need to allow the user to set the authentication type.
This adds a new operation type that selects either IPsec ("first encrypt
then auth") or TLS ("first auth then encrypt") mode.
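
A sketch of how a caller would select the mode (the cmsg handling is the
standard af_alg pattern; only ALG_SET_AEAD_TYPE is new in this patch):

#include <string.h>
#include <stdint.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

/* cbuf must provide at least CMSG_SPACE(sizeof(uint32_t)) bytes */
static void alg_set_aead_type(struct msghdr *msg, void *cbuf,
			      size_t cbuflen, uint32_t type)
{
	struct cmsghdr *cmsg;

	memset(cbuf, 0, cbuflen);
	msg->msg_control = cbuf;
	msg->msg_controllen = CMSG_SPACE(sizeof(uint32_t));

	cmsg = CMSG_FIRSTHDR(msg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_AEAD_TYPE;
	cmsg->cmsg_len = CMSG_LEN(sizeof(uint32_t));
	/* ALG_AEAD_TLS for auth-then-encrypt, ALG_AEAD_IPSEC otherwise */
	*(uint32_t *)CMSG_DATA(cmsg) = type;
}

In practice this would be sent together with ALG_SET_OP and ALG_SET_IV
in the first sendmsg() on the op socket.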

Signed-off-by: Tadeusz Struk <[email protected]>
---
crypto/af_alg.c | 6 ++++++
include/crypto/if_alg.h | 1 +
include/uapi/linux/if_alg.h | 4 ++++
3 files changed, 11 insertions(+)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index f5e18c2..cc5f2b0 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -464,6 +464,12 @@ int af_alg_cmsg_send(struct msghdr *msg, struct af_alg_control *con)
con->op = *(u32 *)CMSG_DATA(cmsg);
break;

+ case ALG_SET_AEAD_TYPE:
+ if (cmsg->cmsg_len < CMSG_LEN(sizeof(u32)))
+ return -EINVAL;
+ con->op_type = *(u32 *)CMSG_DATA(cmsg);
+ break;
+
case ALG_SET_AEAD_ASSOCLEN:
if (cmsg->cmsg_len < CMSG_LEN(sizeof(u32)))
return -EINVAL;
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index a2bfd78..d76ea0c 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -45,6 +45,7 @@ struct af_alg_completion {
struct af_alg_control {
struct af_alg_iv *iv;
int op;
+ int op_type;
unsigned int aead_assoclen;
};

diff --git a/include/uapi/linux/if_alg.h b/include/uapi/linux/if_alg.h
index f2acd2f..cef00de 100644
--- a/include/uapi/linux/if_alg.h
+++ b/include/uapi/linux/if_alg.h
@@ -34,9 +34,13 @@ struct af_alg_iv {
#define ALG_SET_OP 3
#define ALG_SET_AEAD_ASSOCLEN 4
#define ALG_SET_AEAD_AUTHSIZE 5
+#define ALG_SET_AEAD_TYPE 6

/* Operations */
#define ALG_OP_DECRYPT 0
#define ALG_OP_ENCRYPT 1

+/* AEAD operation type */
+#define ALG_AEAD_IPSEC 0 /* First encrypt then authenticate */
+#define ALG_AEAD_TLS 1 /* First authenticate then encrypt */
#endif /* _LINUX_IF_ALG_H */

2016-03-06 01:25:35

by Tadeusz Struk

Subject: [PATCH 3/3] crypto: algif_aead - modify algif aead interface to work with encauth

This updates algif_aead to work with the new TLS authentication mode.
The patch is generated on top of the algif_aead async patch:
https://patchwork.kernel.org/patch/8182971/
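
From the caller's perspective, a TLS-mode encrypt round trip then looks
roughly like this (a sketch; the msghdr is assumed to already carry the
ALG_SET_OP, ALG_SET_AEAD_TYPE, ALG_SET_IV and ALG_SET_AEAD_ASSOCLEN
control messages from patch 2):

#include <sys/socket.h>
#include <unistd.h>

/* Send assoc || plaintext, read back assoc || enc(plaintext || tag ||
 * padding). The output buffer must leave room for the auth tag plus up
 * to one cipher block of padding. */
static ssize_t tls_encrypt_record(int opfd, struct msghdr *msg,
				  void *out, size_t outlen)
{
	if (sendmsg(opfd, msg, 0) < 0)
		return -1;
	return read(opfd, out, outlen);
}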

Signed-off-by: Tadeusz Struk <[email protected]>
---
crypto/algif_aead.c | 93 +++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 79 insertions(+), 14 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 47d4f71..2d53054 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -26,7 +26,7 @@

struct aead_sg_list {
unsigned int cur;
- struct scatterlist sg[ALG_MAX_PAGES];
+ struct scatterlist sg[ALG_MAX_PAGES + 1];
};

struct aead_async_rsgl {
@@ -40,6 +40,7 @@ struct aead_async_req {
struct list_head list;
struct kiocb *iocb;
unsigned int tsgls;
+ bool padded;
char iv[];
};

@@ -49,6 +50,7 @@ struct aead_ctx {
struct list_head list;

void *iv;
+ void *padd;

struct af_alg_completion completion;

@@ -58,6 +60,7 @@ struct aead_ctx {
bool more;
bool merge;
bool enc;
+ bool type;

size_t aead_assoclen;
struct aead_request aead_req;
@@ -88,7 +91,7 @@ static void aead_reset_ctx(struct aead_ctx *ctx)
{
struct aead_sg_list *sgl = &ctx->tsgl;

- sg_init_table(sgl->sg, ALG_MAX_PAGES);
+ sg_init_table(sgl->sg, ALG_MAX_PAGES + 1);
sgl->cur = 0;
ctx->used = 0;
ctx->more = 0;
@@ -191,6 +194,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
struct af_alg_control con = {};
long copied = 0;
bool enc = 0;
+ bool type = 0;
bool init = 0;
int err = -EINVAL;

@@ -211,6 +215,15 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
return -EINVAL;
}

+ switch (con.op_type) {
+ case ALG_AEAD_IPSEC:
+ case ALG_AEAD_TLS:
+ type = con.op_type;
+ break;
+ default:
+ return -EINVAL;
+ }
+
if (con.iv && con.iv->ivlen != ivsize)
return -EINVAL;
}
@@ -221,6 +234,7 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)

if (init) {
ctx->enc = enc;
+ ctx->type = type;
if (con.iv)
memcpy(ctx->iv, con.iv->iv, ivsize);

@@ -399,7 +413,8 @@ static void aead_async_cb(struct crypto_async_request *_req, int err)
for (i = 0; i < areq->tsgls; i++)
put_page(sg_page(sg + i));

- sock_kfree_s(sk, areq->tsgl, sizeof(*areq->tsgl) * areq->tsgls);
+ sock_kfree_s(sk, areq->tsgl, sizeof(*areq->tsgl) *
+ (areq->tsgls + areq->padded));
sock_kfree_s(sk, req, reqlen);
__sock_put(sk);
iocb->ki_complete(iocb, err, err);
@@ -417,11 +432,14 @@ static int aead_recvmsg_async(struct socket *sock, struct msghdr *msg,
struct aead_sg_list *sgl = &ctx->tsgl;
struct aead_async_rsgl *last_rsgl = NULL, *rsgl;
unsigned int as = crypto_aead_authsize(tfm);
+ unsigned int bs = crypto_aead_blocksize(tfm);
unsigned int i, reqlen = GET_REQ_SIZE(tfm);
int err = -ENOMEM;
unsigned long used;
size_t outlen;
size_t usedpages = 0;
+ size_t paddlen = 0;
+ char *paddbuf;

lock_sock(sk);
if (ctx->more) {
@@ -451,17 +469,28 @@ static int aead_recvmsg_async(struct socket *sock, struct msghdr *msg,
aead_async_cb, sk);
used -= ctx->aead_assoclen + (ctx->enc ? as : 0);

+ if (ctx->enc && ctx->type == ALG_AEAD_TLS)
+ paddlen = bs - ((used + as) % bs);
+
+ outlen += paddlen;
+ areq->padded = !!paddlen;
+
/* take over all tx sgls from ctx */
- areq->tsgl = sock_kmalloc(sk, sizeof(*areq->tsgl) * sgl->cur,
- GFP_KERNEL);
+ areq->tsgl = sock_kmalloc(sk, sizeof(*areq->tsgl) *
+ (sgl->cur + !!paddlen), GFP_KERNEL);
if (unlikely(!areq->tsgl))
goto free;

- sg_init_table(areq->tsgl, sgl->cur);
+ sg_init_table(areq->tsgl, sgl->cur + !!paddlen);
for (i = 0; i < sgl->cur; i++)
sg_set_page(&areq->tsgl[i], sg_page(&sgl->sg[i]),
sgl->sg[i].length, sgl->sg[i].offset);

+ if (paddlen) {
+ paddbuf = areq->iv + crypto_aead_ivsize(tfm);
+ sg_set_buf(&areq->tsgl[sgl->cur], paddbuf, paddlen);
+ }
+
areq->tsgls = sgl->cur;

/* create rx sgls */
@@ -530,7 +559,8 @@ free:
sock_kfree_s(sk, rsgl, sizeof(*rsgl));
}
if (areq->tsgl)
- sock_kfree_s(sk, areq->tsgl, sizeof(*areq->tsgl) * areq->tsgls);
+ sock_kfree_s(sk, areq->tsgl, sizeof(*areq->tsgl) *
+ (areq->tsgls + !!paddlen));
if (req)
sock_kfree_s(sk, req, reqlen);
unlock:
@@ -544,7 +574,9 @@ static int aead_recvmsg_sync(struct socket *sock, struct msghdr *msg, int flags)
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
struct aead_ctx *ctx = ask->private;
- unsigned as = crypto_aead_authsize(crypto_aead_reqtfm(&ctx->aead_req));
+ struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
+ unsigned as = crypto_aead_authsize(tfm);
+ unsigned bs = crypto_aead_blocksize(tfm);
struct aead_sg_list *sgl = &ctx->tsgl;
struct aead_async_rsgl *last_rsgl = NULL;
struct aead_async_rsgl *rsgl, *tmp;
@@ -552,6 +584,7 @@ static int aead_recvmsg_sync(struct socket *sock, struct msghdr *msg, int flags)
unsigned long used = 0;
size_t outlen = 0;
size_t usedpages = 0;
+ size_t paddlen = 0;

lock_sock(sk);

@@ -564,10 +597,19 @@ static int aead_recvmsg_sync(struct socket *sock, struct msghdr *msg, int flags)
*
* The memory structure for cipher operation has the following
* structure:
+ *
+ * For IPSec type (authenc):
* AEAD encryption input: assoc data || plaintext
* AEAD encryption output: ciphertext || auth tag
* AEAD decryption input: assoc data || ciphertext || auth tag
* AEAD decryption output: plaintext
+ *
+ * For TLS type (encauth):
+ * AEAD encryption input: assoc data || plaintext
+ * AEAD encryption output: ciphertext, consisting of:
+ * enc(plaintext || auth tag || padding)
+ * AEAD decryption input: assoc data || ciphertext
+ * AEAD decryption output: plaintext
*/

if (ctx->more) {
@@ -598,6 +640,11 @@ static int aead_recvmsg_sync(struct socket *sock, struct msghdr *msg, int flags)
*/
used -= ctx->aead_assoclen + (ctx->enc ? as : 0);

+ if (ctx->enc && ctx->type == ALG_AEAD_TLS)
+ paddlen = bs - ((used + as) % bs);
+
+ outlen += paddlen;
+
/* convert iovecs of output buffers into scatterlists */
while (iov_iter_count(&msg->msg_iter)) {
size_t seglen = min_t(size_t, iov_iter_count(&msg->msg_iter),
@@ -637,7 +684,14 @@ static int aead_recvmsg_sync(struct socket *sock, struct msghdr *msg, int flags)
if (usedpages < outlen)
goto unlock;

- sg_mark_end(sgl->sg + sgl->cur - 1);
+ if (paddlen) {
+ struct scatterlist *padd = sgl->sg + sgl->cur;
+
+ sg_set_buf(padd, ctx->padd, paddlen);
+ sg_mark_end(sgl->sg + sgl->cur);
+ } else {
+ sg_mark_end(sgl->sg + sgl->cur - 1);
+ }
aead_request_set_crypt(&ctx->aead_req, sgl->sg, ctx->first_rsgl.sgl.sg,
used, ctx->iv);
aead_request_set_ad(&ctx->aead_req, ctx->aead_assoclen);
@@ -759,6 +813,7 @@ static void aead_sock_destruct(struct sock *sk)
WARN_ON(atomic_read(&sk->sk_refcnt) != 0);
aead_put_sgl(sk);
sock_kzfree_s(sk, ctx->iv, ivlen);
+ sock_kfree_s(sk, ctx->padd, crypto_aead_blocksize(crypto_aead_reqtfm(&ctx->aead_req)));
sock_kfree_s(sk, ctx, ctx->len);
af_alg_release_parent(sk);
}
@@ -776,12 +831,16 @@ static int aead_accept_parent(void *private, struct sock *sk)
memset(ctx, 0, len);

ctx->iv = sock_kmalloc(sk, ivlen, GFP_KERNEL);
- if (!ctx->iv) {
- sock_kfree_s(sk, ctx, len);
- return -ENOMEM;
- }
+ if (!ctx->iv)
+ goto err_free_ctx;
+
memset(ctx->iv, 0, ivlen);

+ ctx->padd = sock_kmalloc(sk, crypto_aead_blocksize(private),
+ GFP_KERNEL);
+ if (!ctx->padd)
+ goto err_free_iv;
+
ctx->len = len;
ctx->used = 0;
ctx->more = 0;
@@ -790,7 +849,7 @@ static int aead_accept_parent(void *private, struct sock *sk)
ctx->tsgl.cur = 0;
ctx->aead_assoclen = 0;
af_alg_init_completion(&ctx->completion);
- sg_init_table(ctx->tsgl.sg, ALG_MAX_PAGES);
+ sg_init_table(ctx->tsgl.sg, ALG_MAX_PAGES + 1);
INIT_LIST_HEAD(&ctx->list);

ask->private = ctx;
@@ -802,6 +861,12 @@ static int aead_accept_parent(void *private, struct sock *sk)
sk->sk_destruct = aead_sock_destruct;

return 0;
+
+err_free_iv:
+ sock_kfree_s(sk, ctx->iv, ivlen);
+err_free_ctx:
+ sock_kfree_s(sk, ctx, len);
+ return -ENOMEM;
}

static const struct af_alg_type algif_type_aead = {

2016-03-07 11:36:35

by Cristian Stoica

Subject: Re: [PATCH 1/3] crypto: authenc - add TLS type encryption

Hi Tadeusz,


+static int crypto_encauth_dgst_verify(struct aead_request *req,
+ unsigned int flags)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ unsigned int authsize = crypto_aead_authsize(tfm);
+ struct aead_instance *inst = aead_alg_instance(tfm);
+ struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
+ struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
+ struct crypto_ahash *auth = ctx->auth;
+ struct encauth_request_ctx *areq_ctx = aead_request_ctx(req);
+ struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
+ u8 *hash = areq_ctx->tail;
+ int i, err = 0, padd_err = 0;
+ u8 paddlen, *ihash;
+ u8 padd[255];
+
+ scatterwalk_map_and_copy(&paddlen, req->dst, req->assoclen +
+ req->cryptlen - 1, 1, 0);
+
+ if (paddlen > 255 || paddlen > req->cryptlen) {
+ paddlen = 1;
+ padd_err = -EBADMSG;
+ }
+
+ scatterwalk_map_and_copy(padd, req->dst, req->assoclen +
+ req->cryptlen - paddlen, paddlen, 0);
+
+ for (i = 0; i < paddlen; i++) {
+ if (padd[i] != paddlen)
+ padd_err = -EBADMSG;
+ }


This part seems to have the same issue my TLS patch has.
See for reference what Andy Lutomirski had to say about it:

http://www.mail-archive.com/linux-crypto%40vger.kernel.org/msg11719.html


Cristian S.

2016-03-07 14:36:13

by Tadeusz Struk

Subject: Re: [PATCH 1/3] crypto: authenc - add TLS type encryption

Hi Cristian,
On 03/07/2016 01:05 AM, Cristian Stoica wrote:
> Hi Tadeusz,
>
>
> +static int crypto_encauth_dgst_verify(struct aead_request *req,
> + unsigned int flags)
> +{
> + struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> + unsigned int authsize = crypto_aead_authsize(tfm);
> + struct aead_instance *inst = aead_alg_instance(tfm);
> + struct crypto_encauth_ctx *ctx = crypto_aead_ctx(tfm);
> + struct encauth_instance_ctx *ictx = aead_instance_ctx(inst);
> + struct crypto_ahash *auth = ctx->auth;
> + struct encauth_request_ctx *areq_ctx = aead_request_ctx(req);
> + struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
> + u8 *hash = areq_ctx->tail;
> + int i, err = 0, padd_err = 0;
> + u8 paddlen, *ihash;
> + u8 padd[255];
> +
> + scatterwalk_map_and_copy(&paddlen, req->dst, req->assoclen +
> + req->cryptlen - 1, 1, 0);
> +
> + if (paddlen > 255 || paddlen > req->cryptlen) {
> + paddlen = 1;
> + padd_err = -EBADMSG;
> + }
> +
> + scatterwalk_map_and_copy(padd, req->dst, req->assoclen +
> + req->cryptlen - paddlen, paddlen, 0);
> +
> + for (i = 0; i < paddlen; i++) {
> + if (padd[i] != paddlen)
> + padd_err = -EBADMSG;
> + }
>
>
> This part seems to have the same issue my TLS patch has.
> See for reference what Andy Lutomirski had to say about it:
>
> http://www.mail-archive.com/linux-crypto%40vger.kernel.org/msg11719.html

Thanks for reviewing and for pointing this out. I was aware of the timing
side-channel issues and did everything I could to avoid them. The main issue
that allowed the Lucky Thirteen attack was that the digest wasn't performed
at all if the padding verification failed. That is not an issue here.
The other issue, caused by the length of the data to digest depending on the
padding length, is inevitable and there is nothing we can do about it.
As the note in the paper says:
"However, our behavior matches OpenSSL, so we leak only as much as they do."

Thanks,
--
TS

2016-03-08 08:35:42

by Cristian Stoica

Subject: Re: [PATCH 1/3] crypto: authenc - add TLS type encryption

Hi Tadeusz,

There is also a follow-up in the next paragraph:

"That pretty much sums up the new attack: the side-channel defenses that were hoped to be sufficient were found not to be (again). So the answer, this time I believe, is to make the processing rigorously constant-time."

The author then makes further changes, keeps instrumenting the code, and still finds a 20 CPU cycle (out of about 18000) difference between the medians for different paddings. Even that small difference was detected over a timing side channel, which is the point I'm making.

SSL/TLS is prone to this implementation issue and many user-space libraries got it wrong. It would be good to see some numbers backing up the claim that the timing differences are not an issue here.
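
Just to illustrate what "rigorously constant-time" means for the padding
check alone, here is a rough sketch (made-up code, not taken from your
patch) that always scans a fixed-size window so the loop count does not
depend on the secret padding length:

#include <errno.h>

/* Sketch only: verify TLS-style padding without branching on the
 * secret padding length. Assumes reclen > 0; rec is the decrypted
 * record. */
static int ct_check_padding(const unsigned char *rec, unsigned int reclen)
{
	unsigned char padlen = rec[reclen - 1];
	unsigned int scan = reclen < 256 ? reclen : 256;
	unsigned int i;
	unsigned char bad;

	/* flag a padding length that would reach outside the record */
	bad = 0 - (unsigned char)(padlen + 1U > reclen);

	for (i = 1; i < scan; i++) {
		/* mask is 0xff only for bytes inside the claimed padding */
		unsigned char in_pad = 0 - (unsigned char)(i <= padlen);

		bad |= in_pad & (rec[reclen - 1 - i] ^ padlen);
	}
	return bad ? -EBADMSG : 0;
}

The same fixed-work treatment then has to be applied to the MAC
computation itself (dummy compression-function calls), which is the
harder part.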

Cristian S.



2016-03-08 16:54:32

by Tadeusz Struk

Subject: Re: [PATCH 1/3] crypto: authenc - add TLS type encryption

Hi Cristian,
On 03/08/2016 12:20 AM, Cristian Stoica wrote:
> There is also a follow-up in the next paragraph:
>
> "That pretty much sums up the new attack: the side-channel defenses that were hoped to be sufficient were found not to be (again). So the answer, this time I believe, is to make the processing rigorously constant-time."
>
> The author then makes further changes, keeps instrumenting the code, and still finds a 20 CPU cycle (out of about 18000) difference between the medians for different paddings. Even that small difference was detected over a timing side channel, which is the point I'm making.
>
> SSL/TLS is prone to this implementation issue and many user-space libraries got it wrong. It would be good to see some numbers backing up the claim that the timing differences are not an issue here.

It is hard to get the implementation right when the protocol design is error-prone.
We should run some tests on it later and see how relevant this will be for a remote timing attack.
Thanks,
--
TS

2016-03-09 08:18:12

by Cristian Stoica

Subject: Re: [PATCH 1/3] crypto: authenc - add TLS type encryption

Hi Tadeusz,

>> SSL/TLS is prone to this implementation issue and many user-space libraries
>> got it wrong. It would be good to see some numbers backing up the claim that
>> the timing differences are not an issue here.

> It is hard to get the implementation right when the protocol design is error-prone.
> We should run some tests on it later and see how relevant this will be for a
> remote timing attack.

Why later and who will do it?

If it's only a proof of concept, then it's a bad idea. You are practically advertising a use-it-but-cross-your-fingers implementation.
If you intend to submit another hardware driver which _is_ constant-time, then it is an even worse idea: the end user doesn't know which driver is actually running, or whether it is resistant to timing attacks.

Cristian S.

2016-04-05 11:29:47

by Herbert Xu

Subject: Re: [PATCH 0/3] crypto: af_alg - add TLS type encryption

On Sat, Mar 05, 2016 at 05:20:44PM -0800, Tadeusz Struk wrote:
> Hi,
> The following series adds TLS type authentication. To do this a new
> template, encauth, is introduced. It is derived from the existing authenc
> template and modified to work in "first auth then encrypt" mode.
> The algif interface is also changed to work with the new authentication type.

What is the point of this patch-set? Who is going to be the user?

Also you're including padding into the algorithm. That goes against
the way we implemented IPsec. What is the justification for doing
it in the crypto layer instead of the protocol layer?

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-04-06 18:01:20

by Tadeusz Struk

Subject: Re: [PATCH 0/3] crypto: af_alg - add TLS type encryption

Hi Herbert,
On 04/05/2016 04:29 AM, Herbert Xu wrote:
> On Sat, Mar 05, 2016 at 05:20:44PM -0800, Tadeusz Struk wrote:
>> > Hi,
>> > The following series adds TLS type authentication. To do this a new
>> > template, encauth, is introduced. It is derived from the existing authenc
>> > template and modified to work in "first auth then encrypt" mode.
>> > The algif interface is also changed to work with the new authentication type.
> What is the point of this patch-set? Who is going to be the user?

The intent is to enable HW acceleration of the TLS protocol.
The way it will work is that user space will send a packet of data
via AF_ALG and the HW will authenticate and encrypt it in one go.

>
> Also you're including padding into the algorithm. That goes against
> the way we implemented IPsec. What is the justification for doing
> it in the crypto layer instead of the protocol layer?

This is because of how the TLS protocol works. In IPsec the stack does the job
of aligning the packet to the block size, so the crypto layer doesn't need to
worry about padding. In TLS we need to make sure that the buffer is still
block-size aligned after the auth tag is appended, and that is why we need
padding. For example, with AES-CBC (16-byte blocks) and HMAC-SHA1 (20-byte
digest), a 100-byte record grows to 120 bytes after auth and needs 8 bytes of
padding to realign.
Do you think we should make the user provide the data in a buffer big enough
to accommodate the digest and padding, and provide the padding itself?
Thanks,
--
TS

2016-04-08 02:53:02

by Herbert Xu

Subject: Re: [PATCH 0/3] crypto: af_alg - add TLS type encryption

On Wed, Apr 06, 2016 at 10:56:12AM -0700, Tadeusz Struk wrote:
>
> The intent is to enable HW acceleration of the TLS protocol.
> The way it will work is that user space will send a packet of data
> via AF_ALG and the HW will authenticate and encrypt it in one go.

There have been suggestions to implement TLS data-path within
the kernel. So we should decide whether we pursue that or go
with your approach before we start adding algorithms.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-04-08 02:58:50

by Tom Herbert

Subject: Re: [PATCH 0/3] crypto: af_alg - add TLS type encryption

On Thu, Apr 7, 2016 at 11:52 PM, Herbert Xu <[email protected]> wrote:
> On Wed, Apr 06, 2016 at 10:56:12AM -0700, Tadeusz Struk wrote:
>>
>> The intent is to enable HW acceleration of the TLS protocol.
>> The way it will work is that user space will send a packet of data
>> via AF_ALG and the HW will authenticate and encrypt it in one go.
>
> There have been suggestions to implement TLS data-path within
> the kernel. So we should decide whether we pursue that or go
> with your approach before we start adding algorithms.
>
Yes, please see Dave Watson's patches on this.

Tom

> Cheers,
> --
> Email: Herbert Xu <[email protected]>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2016-04-12 11:13:27

by Fridolin Pokorny

Subject: Re: [PATCH 0/3] crypto: af_alg - add TLS type encryption



On 08.04.2016 04:58, Tom Herbert wrote:
> On Thu, Apr 7, 2016 at 11:52 PM, Herbert Xu <[email protected]> wrote:
>> On Wed, Apr 06, 2016 at 10:56:12AM -0700, Tadeusz Struk wrote:
>>>
>>> The intent is to enable HW acceleration of the TLS protocol.
>>> The way it will work is that user space will send a packet of data
>>> via AF_ALG and the HW will authenticate and encrypt it in one go.
>>
>> There have been suggestions to implement TLS data-path within
>> the kernel. So we should decide whether we pursue that or go
>> with your approach before we start adding algorithms.
>>
> Yes, please see Dave Watson's patches on this.
>


Hi Tadeusz,

we were experimenting with this. We have a proof of concept of a kernel
TLS-type socket, so-called AF_KTLS, which is based on Dave Watson's
RFC5288 patch. It handles both TLS and DTLS; unfortunately it is not yet
ready to be proposed here. There are still issues to be solved (mostly
around the user-space API design) [1]. If you are interested, we could
combine efforts.

Regards,
Fridolin Pokorny

[1] https://github.com/fridex/af_ktls

2016-04-13 22:51:44

by Tadeusz Struk

Subject: Re: [PATCH 0/3] crypto: af_alg - add TLS type encryption

Hi Fridolin,
On 04/12/2016 04:13 AM, Fridolin Pokorny wrote:
> we were experimenting with this. We have a proof of concept of a kernel
> TLS-type socket, so-called AF_KTLS, which is based on Dave Watson's
> RFC5288 patch. It handles both TLS and DTLS; unfortunately it is not yet
> ready to be proposed here. There are still issues to be solved (mostly
> around the user-space API design) [1]. If you are interested, we could
> combine efforts.
>
> Regards,
> Fridolin Pokorny
>
> [1] https://github.com/fridex/af_ktls

I had a quick look and it looks like it is limited to gcm(aes) only.
I would be more interested in a generic interface that could also handle
generic algorithm suites like aes-cbc-hmac-sha1.
It also seems to work in a synchronous (send one and wait) mode, which is
not a good solution for the HW accelerators I'm trying to enable.
Thanks,
--
TS

2016-04-14 06:47:54

by Nikos Mavrogiannopoulos

Subject: Re: [PATCH 0/3] crypto: af_alg - add TLS type encryption

On Thu, Apr 14, 2016 at 12:46 AM, Tadeusz Struk <[email protected]> wrote:
> Hi Fridolin,
> On 04/12/2016 04:13 AM, Fridolin Pokorny wrote:
>> we were experimenting with this. We have a proof of concept of a kernel
>> TLS-type socket, so-called AF_KTLS, which is based on Dave Watson's
>> RFC5288 patch. It handles both TLS and DTLS; unfortunately it is not yet
>> ready to be proposed here. There are still issues to be solved (mostly
>> around the user-space API design) [1]. If you are interested, we could
>> combine efforts.
>>
>> Regards,
>> Fridolin Pokorny
>>
>> [1] https://github.com/fridex/af_ktls
> I had a quick look and it looks like it is limited to gcm(aes) only.
> I would be more interested in a generic interface that could also handle
> generic algorithm suites like aes-cbc-hmac-sha1.

This is not a real limitation but an advantage. The cbc-hmac-sha1
suites need a lot of hacks to be implemented correctly (just take a look
at one of the existing implementations). There is no point in bringing
such hacks into the kernel, especially since these ciphersuites are
banned from HTTP/2 (see RFC 7540) and have been dropped from TLS 1.3.

> It also seems to work in a synchronous (send one and wait) mode, which is
> not a good solution for the HW accelerators I'm trying to enable.

Is that something that cannot be addressed?

regards,
Nikos