From: Stephan Mueller
To: Dave Watson
Cc: Herbert Xu, Junaid Shahid, Steffen Klassert, linux-crypto@vger.kernel.org,
    "David S. Miller", Hannes Frederic Sowa, Tim Chen, Sabrina Dubroca,
    linux-kernel@vger.kernel.org, Ilya Lesokhin
Subject: Re: [PATCH 14/14] x86/crypto: aesni: Update aesni-intel_glue to use scatter/gather
Date: Tue, 13 Feb 2018 08:42:50 +0100
Message-ID: <54235286.FU8BX9VrCl@tauon.chronox.de>
In-Reply-To: <20180212195128.GA61087@davejwatson-mba.local>
References: <20180212195128.GA61087@davejwatson-mba.local>

On Monday, 12 February 2018, 20:51:28 CET, Dave Watson wrote:

Hi Dave,

> Add gcmaes_en/decrypt_sg routines, that will do scatter/gather
> by sg. Either src or dst may contain multiple buffers, so
> iterate over both at the same time if they are different.
> If the input is the same as the output, iterate only over one.
>
> Currently both the AAD and TAG must be linear, so copy them out
> with scatterlist_map_and_copy.
>
> Only the SSE routines are updated so far, so leave the previous
> gcmaes_en/decrypt routines, and branch to the sg ones if the
> keysize is inappropriate for avx, or we are SSE only.
>
> Signed-off-by: Dave Watson
> ---
>  arch/x86/crypto/aesni-intel_glue.c | 166 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 166 insertions(+)
>
> diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
> index de986f9..1e32fbe 100644
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -791,6 +791,82 @@ static int generic_gcmaes_set_authsize(struct crypto_aead *tfm,
>  	return 0;
>  }
>
> +static int gcmaes_encrypt_sg(struct aead_request *req, unsigned int assoclen,
> +			     u8 *hash_subkey, u8 *iv, void *aes_ctx)
> +{
> +	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> +	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
> +	struct gcm_context_data data AESNI_ALIGN_ATTR;
> +	struct scatter_walk dst_sg_walk = {};
> +	unsigned long left = req->cryptlen;
> +	unsigned long len, srclen, dstlen;
> +	struct scatter_walk src_sg_walk;
> +	struct scatterlist src_start[2];
> +	struct scatterlist dst_start[2];
> +	struct scatterlist *src_sg;
> +	struct scatterlist *dst_sg;
> +	u8 *src, *dst, *assoc;
> +	u8 authTag[16];
> +
> +	assoc = kmalloc(assoclen, GFP_ATOMIC);
> +	if (unlikely(!assoc))
> +		return -ENOMEM;
> +	scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);

Have you tested that this code does not barf when assoclen is 0?

Maybe it is worthwhile to finally add a test vector to testmgr.h which
validates this scenario. If you would like, here is a vector you could
add to testmgr:

https://github.com/smuellerDD/libkcapi/blob/master/test/test.sh#L315

This is a decryption of gcm(aes) with no message, no AAD and just a tag.
The result should be EBADMSG.

> +
> +	src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);

Why do you use assoclen in the map_and_copy, and req->assoclen in the
ffwd?

> +	scatterwalk_start(&src_sg_walk, src_sg);
> +	if (req->src != req->dst) {
> +		dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen);

Ditto: req->assoclen or assoclen?
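For reference, such a vector might look like the following sketch for crypto/testmgr.h. This is only an illustration: the field names follow the v4.15-era struct aead_testvec and may differ in other kernel versions, and the key, IV and tag bytes are arbitrary placeholders. An arbitrary tag is all but guaranteed not to verify, which is exactly what .novrfy = 1 asserts.

```c
/* Hypothetical gcm(aes) decryption vector: zero-length plaintext,
 * zero-length AAD, tag only.  The tag bytes are arbitrary, so the
 * self-test expects verification failure (-EBADMSG), encoded as
 * .novrfy = 1.  Field names as in the v4.15-era struct aead_testvec. */
static const struct aead_testvec aes_gcm_dec_tv_empty[] = {
	{
		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00\x00\x00\x00\x00",
		.klen	= 16,
		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
			  "\x00\x00\x00\x00",
		.input	= "\x01\x02\x03\x04\x05\x06\x07\x08"	/* tag only */
			  "\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10",
		.ilen	= 16,
		.alen	= 0,
		.rlen	= 0,
		.novrfy	= 1,	/* decryption must return -EBADMSG */
	},
};
```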
> +		scatterwalk_start(&dst_sg_walk, dst_sg);
> +	}
> +
> +	kernel_fpu_begin();
> +	aesni_gcm_init(aes_ctx, &data, iv,
> +		       hash_subkey, assoc, assoclen);
> +	if (req->src != req->dst) {
> +		while (left) {
> +			src = scatterwalk_map(&src_sg_walk);
> +			dst = scatterwalk_map(&dst_sg_walk);
> +			srclen = scatterwalk_clamp(&src_sg_walk, left);
> +			dstlen = scatterwalk_clamp(&dst_sg_walk, left);
> +			len = min(srclen, dstlen);
> +			if (len)
> +				aesni_gcm_enc_update(aes_ctx, &data,
> +						     dst, src, len);
> +			left -= len;
> +
> +			scatterwalk_unmap(src);
> +			scatterwalk_unmap(dst);
> +			scatterwalk_advance(&src_sg_walk, len);
> +			scatterwalk_advance(&dst_sg_walk, len);
> +			scatterwalk_done(&src_sg_walk, 0, left);
> +			scatterwalk_done(&dst_sg_walk, 1, left);
> +		}
> +	} else {
> +		while (left) {
> +			dst = src = scatterwalk_map(&src_sg_walk);
> +			len = scatterwalk_clamp(&src_sg_walk, left);
> +			if (len)
> +				aesni_gcm_enc_update(aes_ctx, &data,
> +						     src, src, len);
> +			left -= len;
> +			scatterwalk_unmap(src);
> +			scatterwalk_advance(&src_sg_walk, len);
> +			scatterwalk_done(&src_sg_walk, 1, left);
> +		}
> +	}
> +	aesni_gcm_finalize(aes_ctx, &data, authTag, auth_tag_len);
> +	kernel_fpu_end();
> +
> +	kfree(assoc);
> +
> +	/* Copy in the authTag */
> +	scatterwalk_map_and_copy(authTag, req->dst,
> +				 req->assoclen + req->cryptlen,
> +				 auth_tag_len, 1);
> +	return 0;
> +}
> +
>  static int gcmaes_encrypt(struct aead_request *req, unsigned int assoclen,
>  			  u8 *hash_subkey, u8 *iv, void *aes_ctx)
>  {
> @@ -802,6 +878,11 @@ static int gcmaes_encrypt(struct aead_request *req, unsigned int assoclen,
>  	struct scatter_walk dst_sg_walk = {};
>  	struct gcm_context_data data AESNI_ALIGN_ATTR;
>
> +	if (((struct crypto_aes_ctx *)aes_ctx)->key_length != AES_KEYSIZE_128 ||
> +	    aesni_gcm_enc_tfm == aesni_gcm_enc) {
> +		return gcmaes_encrypt_sg(req, assoclen, hash_subkey, iv,
> +			aes_ctx);
> +	}
>  	if (sg_is_last(req->src) &&
>  	    (!PageHighMem(sg_page(req->src)) ||
>  	    req->src->offset + req->src->length <=
>  	    PAGE_SIZE) &&
> @@ -854,6 +935,86 @@ static int gcmaes_encrypt(struct aead_request *req, unsigned int assoclen,
>  	return 0;
>  }
>
> +static int gcmaes_decrypt_sg(struct aead_request *req, unsigned int assoclen,
> +			     u8 *hash_subkey, u8 *iv, void *aes_ctx)
> +{

This is a lot of code duplication.

> +	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> +	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
> +	unsigned long left = req->cryptlen - auth_tag_len;
> +	struct gcm_context_data data AESNI_ALIGN_ATTR;
> +	struct scatter_walk dst_sg_walk = {};
> +	unsigned long len, srclen, dstlen;
> +	struct scatter_walk src_sg_walk;
> +	struct scatterlist src_start[2];
> +	struct scatterlist dst_start[2];
> +	struct scatterlist *src_sg;
> +	struct scatterlist *dst_sg;
> +	u8 *src, *dst, *assoc;
> +	u8 authTagGen[16];
> +	u8 authTag[16];
> +
> +	assoc = kmalloc(assoclen, GFP_ATOMIC);
> +	if (unlikely(!assoc))
> +		return -ENOMEM;
> +	scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);
> +
> +	src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
> +	scatterwalk_start(&src_sg_walk, src_sg);
> +	if (req->src != req->dst) {
> +		dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen);
> +		scatterwalk_start(&dst_sg_walk, dst_sg);
> +	}
> +
> +	kernel_fpu_begin();
> +	aesni_gcm_init(aes_ctx, &data, iv,
> +		       hash_subkey, assoc, assoclen);
> +	if (req->src != req->dst) {
> +		while (left) {
> +			src = scatterwalk_map(&src_sg_walk);
> +			dst = scatterwalk_map(&dst_sg_walk);
> +			srclen = scatterwalk_clamp(&src_sg_walk, left);
> +			dstlen = scatterwalk_clamp(&dst_sg_walk, left);
> +			len = min(srclen, dstlen);
> +			if (len)
> +				aesni_gcm_dec_update(aes_ctx, &data,
> +						     dst, src, len);
> +			left -= len;
> +
> +			scatterwalk_unmap(src);
> +			scatterwalk_unmap(dst);
> +			scatterwalk_advance(&src_sg_walk, len);
> +			scatterwalk_advance(&dst_sg_walk, len);
> +			scatterwalk_done(&src_sg_walk, 0, left);
> +			scatterwalk_done(&dst_sg_walk, 1, left);
> +		}
> +	} else {
> +		while (left)
> +		{
> +			dst = src = scatterwalk_map(&src_sg_walk);
> +			len = scatterwalk_clamp(&src_sg_walk, left);
> +			if (len)
> +				aesni_gcm_dec_update(aes_ctx, &data,
> +						     src, src, len);
> +			left -= len;
> +			scatterwalk_unmap(src);
> +			scatterwalk_advance(&src_sg_walk, len);
> +			scatterwalk_done(&src_sg_walk, 1, left);
> +		}
> +	}
> +	aesni_gcm_finalize(aes_ctx, &data, authTagGen, auth_tag_len);
> +	kernel_fpu_end();
> +
> +	kfree(assoc);
> +
> +	/* Copy out original authTag */
> +	scatterwalk_map_and_copy(authTag, req->src,
> +				 req->assoclen + req->cryptlen - auth_tag_len,
> +				 auth_tag_len, 0);
> +
> +	/* Compare generated tag with passed in tag. */
> +	return crypto_memneq(authTagGen, authTag, auth_tag_len) ?
> +		-EBADMSG : 0;
> +}
> +
>  static int gcmaes_decrypt(struct aead_request *req, unsigned int assoclen,
>  			  u8 *hash_subkey, u8 *iv, void *aes_ctx)
>  {
> @@ -868,6 +1029,11 @@ static int gcmaes_decrypt(struct aead_request *req, unsigned int assoclen,
>  	struct gcm_context_data data AESNI_ALIGN_ATTR;
>  	int retval = 0;
>
> +	if (((struct crypto_aes_ctx *)aes_ctx)->key_length != AES_KEYSIZE_128 ||
> +	    aesni_gcm_enc_tfm == aesni_gcm_enc) {
> +		return gcmaes_decrypt_sg(req, assoclen, hash_subkey, iv,
> +			aes_ctx);
> +	}
>  	tempCipherLen = (unsigned long)(req->cryptlen - auth_tag_len);
>
>  	if (sg_is_last(req->src) &&

Ciao
Stephan
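P.S. On the duplication point above: the encrypt and decrypt walkers differ only in which update routine they call, so one way to fold them together would be a shared walker that takes that routine as a parameter, keeping the tag handling in the callers. A sketch only, reusing the patch's own helper names; gcm_update_fn and gcmaes_crypt_walk are hypothetical:

```c
/* Hypothetical shared walker for gcmaes_encrypt_sg()/gcmaes_decrypt_sg():
 * the loops differ only in the update routine, so pass it in.  A NULL
 * dst_walk means in-place operation (req->src == req->dst). */
typedef void (*gcm_update_fn)(void *ctx, struct gcm_context_data *data,
			      u8 *out, const u8 *in, unsigned long len);

static void gcmaes_crypt_walk(struct scatter_walk *src_walk,
			      struct scatter_walk *dst_walk,
			      unsigned long left, void *aes_ctx,
			      struct gcm_context_data *data,
			      gcm_update_fn update)
{
	unsigned long len, srclen, dstlen;
	u8 *src, *dst;

	while (left) {
		src = scatterwalk_map(src_walk);
		srclen = scatterwalk_clamp(src_walk, left);
		if (dst_walk) {
			dst = scatterwalk_map(dst_walk);
			dstlen = scatterwalk_clamp(dst_walk, left);
			len = min(srclen, dstlen);
		} else {
			dst = src;	/* in-place */
			len = srclen;
		}
		if (len)
			update(aes_ctx, data, dst, src, len);
		left -= len;

		scatterwalk_unmap(src);
		scatterwalk_advance(src_walk, len);
		if (dst_walk) {
			scatterwalk_unmap(dst);
			scatterwalk_advance(dst_walk, len);
			scatterwalk_done(src_walk, 0, left);
			scatterwalk_done(dst_walk, 1, left);
		} else {
			scatterwalk_done(src_walk, 1, left);
		}
	}
}
```

The callers would then pass aesni_gcm_enc_update or aesni_gcm_dec_update (modulo matching the function-pointer type) and keep only their own setup and tag copy/compare, which also cuts the surface for bugs like the assoclen questions above.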