Date: Wed, 29 May 2019 13:27:28 -0700
From: Eric Biggers
To: Iuliana Prodan
Cc: Herbert Xu, "David S. Miller", Ard Biesheuvel, Horia Geanta, Sascha Hauer,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx
Subject: Re: [PATCH] crypto: gcm - fix cacheline sharing
Message-ID: <20190529202728.GA35103@gmail.com>
References: <1559149856-7938-1-git-send-email-iuliana.prodan@nxp.com>
In-Reply-To: <1559149856-7938-1-git-send-email-iuliana.prodan@nxp.com>
X-Mailing-List: linux-crypto@vger.kernel.org

On Wed, May 29, 2019 at 08:10:56PM +0300, Iuliana Prodan wrote:
> The generic GCM driver should ensure that whatever it passes into
> scatterlists is safe for non-cache coherent DMA.
> The issue was seen while running GCM on CAAM driver. But, since CAAM
> does not support GHASH on i.MX6, only CTR skcipher part of the GCM is
> offloaded.
> The skcipher request received by CAAM has req->src pointing to
> auth_tag[16] and req->iv pointing to iv[16]. Problem is that when
> the iv is updated (crypto API requires skcipher implementations to
> update the IV with the last ciphertext block) is written in iv[16],
> which is on the same cacheline as auth_tag[16] that was previously
> DMA mapped.
> Solution is to use a pointer, aligned to cache line, instead of auth_tag
> buffer, for encryption/decryption and then free it on completion.
>
> Link: https://lore.kernel.org/linux-crypto/20190208114459.5nixe76xmmkhur75@gondor.apana.org.au/
> Cc: # v4.19+
> Fixes: adcbc688fe2f ("crypto: gcm - Convert to new AEAD interface")
> Suggested-by: Ard Biesheuvel
> Signed-off-by: Iuliana Prodan
>
> ---
> I've checked the reproducibility of this issue starting with 4.19.y.
> ---
>  crypto/gcm.c         | 26 +++++++++++++++++---------
>  include/crypto/gcm.h |  1 +
>  2 files changed, 18 insertions(+), 9 deletions(-)
>
> diff --git a/crypto/gcm.c b/crypto/gcm.c
> index 33f45a9..53e3ce5 100644
> --- a/crypto/gcm.c
> +++ b/crypto/gcm.c
> @@ -66,7 +66,7 @@ struct crypto_gcm_ghash_ctx {
>
>  struct crypto_gcm_req_priv_ctx {
>  	u8 iv[16];
> -	u8 auth_tag[16];
> +	u8 *auth_tag;
>  	u8 iauth_tag[16];
>  	struct scatterlist src[3];
>  	struct scatterlist dst[3];
> @@ -177,19 +177,23 @@ static void crypto_gcm_init_common(struct aead_request *req)
>  	__be32 counter = cpu_to_be32(1);
>  	struct scatterlist *sg;
>
> -	memset(pctx->auth_tag, 0, sizeof(pctx->auth_tag));
> +	/*
> +	 * kzalloc alignment is at least the cache line size
> +	 * for non-cache coherent architectures.
> +	 */
> +	pctx->auth_tag = kzalloc(GCM_MAX_AUTH_SIZE, GFP_KERNEL);
>  	memcpy(pctx->iv, req->iv, GCM_AES_IV_SIZE);
>  	memcpy(pctx->iv + GCM_AES_IV_SIZE, &counter, 4);
>
>  	sg_init_table(pctx->src, 3);
> -	sg_set_buf(pctx->src, pctx->auth_tag, sizeof(pctx->auth_tag));
> +	sg_set_buf(pctx->src, pctx->auth_tag, GCM_MAX_AUTH_SIZE);
>  	sg = scatterwalk_ffwd(pctx->src + 1, req->src, req->assoclen);
>  	if (sg != pctx->src + 1)
>  		sg_chain(pctx->src, 2, sg);
>
>  	if (req->src != req->dst) {
>  		sg_init_table(pctx->dst, 3);
> -		sg_set_buf(pctx->dst, pctx->auth_tag, sizeof(pctx->auth_tag));
> +		sg_set_buf(pctx->dst, pctx->auth_tag, GCM_MAX_AUTH_SIZE);
>  		sg = scatterwalk_ffwd(pctx->dst + 1, req->dst, req->assoclen);
>  		if (sg != pctx->dst + 1)
>  			sg_chain(pctx->dst, 2, sg);
> @@ -208,9 +212,8 @@ static void crypto_gcm_init_crypt(struct aead_request *req,
>  	dst = req->src == req->dst ? pctx->src : pctx->dst;
>
>  	skcipher_request_set_tfm(skreq, ctx->ctr);
> -	skcipher_request_set_crypt(skreq, pctx->src, dst,
> -				   cryptlen + sizeof(pctx->auth_tag),
> -				   pctx->iv);
> +	skcipher_request_set_crypt(skreq, pctx->src, dst, cryptlen +
> +				   GCM_MAX_AUTH_SIZE, pctx->iv);
>  }
>
>  static inline unsigned int gcm_remain(unsigned int len)
> @@ -440,6 +443,7 @@ static int gcm_enc_copy_hash(struct aead_request *req, u32 flags)
>  	scatterwalk_map_and_copy(auth_tag, req->dst,
>  				 req->assoclen + req->cryptlen,
>  				 crypto_aead_authsize(aead), 1);
> +	kfree(auth_tag);
>  	return 0;
>  }
>
> @@ -492,11 +496,15 @@ static int crypto_gcm_verify(struct aead_request *req)
>  	u8 *iauth_tag = pctx->iauth_tag;
>  	unsigned int authsize = crypto_aead_authsize(aead);
>  	unsigned int cryptlen = req->cryptlen - authsize;
> +	int err;
>
>  	crypto_xor(auth_tag, iauth_tag, 16);
>  	scatterwalk_map_and_copy(iauth_tag, req->src,
>  				 req->assoclen + cryptlen, authsize, 0);
> -	return crypto_memneq(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
> +	err = crypto_memneq(iauth_tag, auth_tag, authsize) ? -EBADMSG : 0;
> +	kfree(auth_tag);
> +
> +	return err;
>  }
>

So what about the other places that also pass an IV located next to the data,
like crypto/ccm.c and crypto/adiantum.c?

If we're actually going to make this a new API requirement, then we need to
add a debugging option that makes the API detect this violation so that the
other places can be fixed too.

Also, doing a kmalloc() per request is inefficient and very error-prone.  In
fact there are at least 3 bugs here: (1) not checking the return value,
(2) incorrectly using GFP_KERNEL when it may be atomic context, and (3) not
always freeing the memory.

Why not use cacheline-aligned memory within the request context, so that a
separate kmalloc() isn't needed?

Also, did you consider whether there's any way to make the crypto API handle
this automatically, so that all the individual users don't have to?

- Eric
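
For illustration, a minimal, untested sketch of the "cacheline-aligned memory
within the request context" alternative raised above.  The member names follow
struct crypto_gcm_req_priv_ctx in crypto/gcm.c around v4.19; the
____cacheline_aligned placement is only an assumption about what such a fix
could look like, not the patch under review and not necessarily the fix that
was eventually applied:

/*
 * Sketch only (untested): keep the authentication tag inside the request
 * context, but give it exclusive cache line(s) instead of kmalloc()ing a
 * separate buffer per request.  The buffer starts on its own cache line and
 * the member that follows starts on a fresh one, so a CPU write to iv[] (or
 * anything else in the context) can never dirty a cache line that is
 * DMA-mapped for the tag.  This also sidesteps the unchecked allocation, the
 * GFP_KERNEL-in-atomic-context problem, and the missing kfree() noted above.
 * Strictly, ARCH_DMA_MINALIGN is the bound that matters for non-coherent DMA
 * and can be larger than L1_CACHE_BYTES on some architectures.
 */
struct crypto_gcm_req_priv_ctx {
	u8 iv[16];
	u8 iauth_tag[16];
	struct scatterlist src[3];
	struct scatterlist dst[3];
	struct crypto_gcm_ghash_ctx ghash_ctx;

	/* DMA-mapped by drivers; must not share a cache line with iv[] */
	u8 auth_tag[16] ____cacheline_aligned;

	/* next member starts on a new cache line so auth_tag's stays clean */
	union {
		struct ahash_request ahreq;
		struct skcipher_request skreq;
	} u ____cacheline_aligned;
};

And if the per-request allocation were kept instead, the three problems listed
above would need roughly the following shape (sketch only;
crypto_gcm_init_common() currently returns void, so its callers would also
have to start handling an error):

static int crypto_gcm_init_common(struct aead_request *req)
{
	struct crypto_gcm_req_priv_ctx *pctx = crypto_gcm_reqctx(req);
	/* (2) respect callers that must not sleep instead of assuming GFP_KERNEL */
	gfp_t gfp = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
		    GFP_KERNEL : GFP_ATOMIC;

	pctx->auth_tag = kzalloc(GCM_MAX_AUTH_SIZE, gfp);
	if (!pctx->auth_tag)		/* (1) check the allocation result */
		return -ENOMEM;

	/* ... rest of crypto_gcm_init_common() as in the patch ... */

	/*
	 * (3) every path that completes the request, including error paths,
	 * must then kfree(pctx->auth_tag) exactly once.
	 */
	return 0;
}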