From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ard Biesheuvel, Herbert Xu
Subject: [PATCH 5.4 270/340] crypto: aesni - prevent misaligned buffers on the stack
Date: Mon, 1 Mar 2021 17:13:34 +0100
Message-Id: <20210301161101.582933081@linuxfoundation.org>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210301161048.294656001@linuxfoundation.org>
References: <20210301161048.294656001@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Ard Biesheuvel

commit a13ed1d15b07a04b1f74b2df61ff7a5e47f45dd8 upstream.

The GCM mode driver uses 16 byte aligned buffers on the stack to pass
the IV to the asm helpers, but unfortunately, the x86 port does not
guarantee that the stack pointer is 16 byte aligned upon entry in the
first place. Since the compiler is not aware of this, it will not emit
the additional stack realignment sequence that is needed, and so the
alignment is not guaranteed to be more than 8 bytes.

So instead, allocate some padding on the stack, and realign the IV
pointer by hand.

Cc:
Signed-off-by: Ard Biesheuvel
Signed-off-by: Herbert Xu
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/crypto/aesni-intel_glue.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -707,7 +707,8 @@ static int gcmaes_crypt_by_sg(bool enc,
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
 	const struct aesni_gcm_tfm_s *gcm_tfm = aesni_gcm_tfm;
-	struct gcm_context_data data AESNI_ALIGN_ATTR;
+	u8 databuf[sizeof(struct gcm_context_data) + (AESNI_ALIGN - 8)] __aligned(8);
+	struct gcm_context_data *data = PTR_ALIGN((void *)databuf, AESNI_ALIGN);
 	struct scatter_walk dst_sg_walk = {};
 	unsigned long left = req->cryptlen;
 	unsigned long len, srclen, dstlen;
@@ -760,8 +761,7 @@ static int gcmaes_crypt_by_sg(bool enc,
 	}
 
 	kernel_fpu_begin();
-	gcm_tfm->init(aes_ctx, &data, iv,
-		      hash_subkey, assoc, assoclen);
+	gcm_tfm->init(aes_ctx, data, iv, hash_subkey, assoc, assoclen);
 	if (req->src != req->dst) {
 		while (left) {
 			src = scatterwalk_map(&src_sg_walk);
@@ -771,10 +771,10 @@ static int gcmaes_crypt_by_sg(bool enc,
 			len = min(srclen, dstlen);
 			if (len) {
 				if (enc)
-					gcm_tfm->enc_update(aes_ctx, &data,
+					gcm_tfm->enc_update(aes_ctx, data,
 							    dst, src, len);
 				else
-					gcm_tfm->dec_update(aes_ctx, &data,
+					gcm_tfm->dec_update(aes_ctx, data,
 							    dst, src, len);
 			}
 			left -= len;
@@ -792,10 +792,10 @@ static int gcmaes_crypt_by_sg(bool enc,
 			len = scatterwalk_clamp(&src_sg_walk, left);
 			if (len) {
 				if (enc)
-					gcm_tfm->enc_update(aes_ctx, &data,
+					gcm_tfm->enc_update(aes_ctx, data,
 							    src, src, len);
 				else
-					gcm_tfm->dec_update(aes_ctx, &data,
+					gcm_tfm->dec_update(aes_ctx, data,
 							    src, src, len);
 			}
 			left -= len;
@@ -804,7 +804,7 @@ static int gcmaes_crypt_by_sg(bool enc,
 			scatterwalk_done(&src_sg_walk, 1, left);
 		}
 	}
-	gcm_tfm->finalize(aes_ctx, &data, authTag, auth_tag_len);
+	gcm_tfm->finalize(aes_ctx, data, authTag, auth_tag_len);
 	kernel_fpu_end();
 
 	if (!assocmem)
@@ -853,7 +853,8 @@ static int helper_rfc4106_encrypt(struct
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm);
 	void *aes_ctx = &(ctx->aes_key_expanded);
-	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
+	u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8);
+	u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN);
 	unsigned int i;
 	__be32 counter = cpu_to_be32(1);
@@ -880,7 +881,8 @@ static int helper_rfc4106_decrypt(struct
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm);
 	void *aes_ctx = &(ctx->aes_key_expanded);
-	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
+	u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8);
+	u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN);
 	unsigned int i;
 
 	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
@@ -1010,7 +1012,8 @@ static int generic_gcmaes_encrypt(struct
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm);
 	void *aes_ctx = &(ctx->aes_key_expanded);
-	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
+	u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8);
+	u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN);
 	__be32 counter = cpu_to_be32(1);
 
 	memcpy(iv, req->iv, 12);
@@ -1026,7 +1029,8 @@ static int generic_gcmaes_decrypt(struct
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm);
 	void *aes_ctx = &(ctx->aes_key_expanded);
-	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
+	u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8);
+	u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN);
 
 	memcpy(iv, req->iv, 12);
 	*((__be32 *)(iv+12)) = counter;