Date: Tue, 25 Jun 2019 21:49:42 -0700
From: Eric Biggers
To: Ard Biesheuvel
Cc: Herbert Xu, linux-fscrypt@vger.kernel.org, Gilad Ben-Yossef,
	device-mapper development,
	"open list:HARDWARE RANDOM NUMBER GENERATOR CORE", Milan Broz
Subject: Re: [dm-devel] [PATCH v4 0/6] crypto: switch to crypto API for ESSIV generation
Message-ID: <20190626044942.GB23471@sol.localdomain>
References: <20190621080918.22809-1-ard.biesheuvel@arm.com>
List-ID: linux-crypto@vger.kernel.org

On Sun, Jun 23, 2019 at 11:30:41AM +0200, Ard Biesheuvel wrote:
> On Fri, 21 Jun 2019 at 10:09, Ard Biesheuvel wrote:
> >
> > From: Ard Biesheuvel
> >
> ...
> >
> > - given that hardware already exists that can perform en/decryption
> >   including ESSIV generation of a range of blocks, it would be useful to
> >   encapsulate this in the ESSIV template, and teach at least dm-crypt how
> >   to use it (given that it often processes 8 512-byte sectors at a time)
>
> I thought about this a bit more, and it occurred to me that the
> capability of issuing several sectors at a time and letting the lower
> layers increment the IV between sectors is orthogonal to whether ESSIV
> is being used or not, and so it probably belongs in another wrapper.
>
> I.e., if we define a skcipher template like dmplain64le(), which is
> defined as taking a sector size as part of the key, and which
> increments a 64-bit LE counter between sectors if multiple are passed,
> it can be used not only for ESSIV but also for XTS, which I assume can
> be h/w accelerated in the same way.
>
> So with that in mind, I think we should decouple the multi-sector
> discussion and leave it for a followup series, preferably proposed by
> someone who also has access to some hardware to prototype it on.
>

This makes sense, but if we're going to leave that functionality out of the
essiv template, I think we should revisit whether the essiv template takes a
__le64 sector number vs. just an IV matching the cipher block size.  To me,
defining the IV to be a __le64 seems like a layering violation.
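[For readers following along: a minimal Python sketch of the plain64-style IV
construction under discussion -- not the kernel code, and the helper names here
are made up for illustration. The __le64 sector number is zero-padded to the
cipher block size, and a multi-sector wrapper like the proposed dmplain64le()
template would simply increment that counter once per sector:]

```python
import struct

def plain64le_iv(sector: int, block_size: int = 16) -> bytes:
    """Zero-pad a little-endian 64-bit sector number to the cipher block size."""
    return struct.pack("<Q", sector) + b"\x00" * (block_size - 8)

def multi_sector_ivs(start_sector: int, nr_sectors: int) -> list:
    """IVs for a multi-sector request: a 64-bit LE counter, incremented
    between sectors, independent of whether ESSIV or XTS is layered on top."""
    return [plain64le_iv(start_sector + i) for i in range(nr_sectors)]
```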
Also, dm-crypt and fscrypt already know how to zero-pad the sector number to
form the full 16-byte IV, and your patch just duplicates that logic in the
essiv template too, which makes it more complicated than necessary.  E.g.,
the following incremental patch for the skcipher case would simplify it:
(You'd have to do it for the AEAD case too.)

diff --git a/crypto/essiv.c b/crypto/essiv.c
index 8e80814ec7d6..737e92ebcbd8 100644
--- a/crypto/essiv.c
+++ b/crypto/essiv.c
@@ -57,11 +57,6 @@ struct essiv_tfm_ctx {
 	struct crypto_shash *hash;
 };
 
-struct essiv_skcipher_request_ctx {
-	u8 iv[MAX_INNER_IV_SIZE];
-	struct skcipher_request skcipher_req;
-};
-
 struct essiv_aead_request_ctx {
 	u8 iv[2][MAX_INNER_IV_SIZE];
 	struct scatterlist src[4], dst[4];
@@ -161,39 +156,32 @@ static void essiv_skcipher_done(struct crypto_async_request *areq, int err)
 	skcipher_request_complete(req, err);
 }
 
-static void essiv_skcipher_prepare_subreq(struct skcipher_request *req)
+static int essiv_skcipher_crypt(struct skcipher_request *req, bool enc)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
-	struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-	struct skcipher_request *subreq = &rctx->skcipher_req;
-
-	memset(rctx->iv, 0, crypto_cipher_blocksize(tctx->essiv_cipher));
-	memcpy(rctx->iv, req->iv, crypto_skcipher_ivsize(tfm));
+	struct skcipher_request *subreq = skcipher_request_ctx(req);
 
-	crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv, rctx->iv);
+	crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv);
 
 	skcipher_request_set_tfm(subreq, tctx->u.skcipher);
 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
-				   rctx->iv);
+				   req->iv);
 	skcipher_request_set_callback(subreq, skcipher_request_flags(req),
 				      essiv_skcipher_done, req);
+
+	return enc ? crypto_skcipher_encrypt(subreq) :
+		     crypto_skcipher_decrypt(subreq);
 }
 
 static int essiv_skcipher_encrypt(struct skcipher_request *req)
 {
-	struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-
-	essiv_skcipher_prepare_subreq(req);
-	return crypto_skcipher_encrypt(&rctx->skcipher_req);
+	return essiv_skcipher_crypt(req, true);
 }
 
 static int essiv_skcipher_decrypt(struct skcipher_request *req)
 {
-	struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-
-	essiv_skcipher_prepare_subreq(req);
-	return crypto_skcipher_decrypt(&rctx->skcipher_req);
+	return essiv_skcipher_crypt(req, false);
 }
 
 static void essiv_aead_done(struct crypto_async_request *areq, int err)
@@ -300,24 +288,14 @@ static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
 	struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
 	struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
 	struct crypto_skcipher *skcipher;
-	unsigned int subreq_size;
 	int err;
 
-	BUILD_BUG_ON(offsetofend(struct essiv_skcipher_request_ctx,
-				 skcipher_req) !=
-		     sizeof(struct essiv_skcipher_request_ctx));
-
 	skcipher = crypto_spawn_skcipher(&ictx->u.skcipher_spawn);
 	if (IS_ERR(skcipher))
 		return PTR_ERR(skcipher);
 
-	subreq_size = FIELD_SIZEOF(struct essiv_skcipher_request_ctx,
-				   skcipher_req) +
-		      crypto_skcipher_reqsize(skcipher);
-
-	crypto_skcipher_set_reqsize(tfm,
-				    offsetof(struct essiv_skcipher_request_ctx,
-					     skcipher_req) + subreq_size);
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
+				    crypto_skcipher_reqsize(skcipher));
 
 	err = essiv_init_tfm(ictx, tctx);
 	if (err) {
@@ -567,9 +545,9 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
 	skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg);
 	skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg);
-	skcipher_inst->alg.ivsize = ESSIV_IV_SIZE;
-	skcipher_inst->alg.chunksize = skcipher_alg->chunksize;
-	skcipher_inst->alg.walksize = skcipher_alg->walksize;
+	skcipher_inst->alg.ivsize = crypto_skcipher_alg_ivsize(skcipher_alg);
+	skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg);
+	skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg);
 
 	skcipher_inst->free = essiv_skcipher_free_instance;
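
[For context, the ESSIV scheme the template implements derives the per-sector
IV by encrypting the sector IV with a block cipher keyed by a hash of the bulk
key. A rough Python sketch of that data flow, assuming SHA-256 as the hash and
with toy_block_encrypt() as an explicitly fake stand-in for the real block
cipher (AES in the kernel's essiv_cipher) -- only the structure is meaningful:]

```python
import hashlib
import struct

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # NOT a real cipher: stand-in for crypto_cipher_encrypt_one() so the
    # sketch runs without a crypto library; only the data flow matters.
    keystream = hashlib.sha256(key).digest()[:16]
    return bytes(a ^ b for a, b in zip(keystream, block))

def essiv_iv(key: bytes, sector: int) -> bytes:
    # salt = H(key); IV = E_salt(le64(sector) zero-padded to the block size)
    salt = hashlib.sha256(key).digest()
    sector_iv = struct.pack("<Q", sector).ljust(16, b"\x00")
    return toy_block_encrypt(salt, sector_iv)
```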