From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ondrej Mosnacek, Eric Biggers, Herbert Xu
Subject: [PATCH 5.0 048/238] crypto: x86/morus - fix handling chunked inputs and MAY_SLEEP
Date: Fri, 22 Mar 2019 12:14:27 +0100
Message-Id: <20190322111301.231522715@linuxfoundation.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190322111258.383569278@linuxfoundation.org>
References: <20190322111258.383569278@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
X-Patchwork-Hint: ignore
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

5.0-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Eric Biggers

commit 2060e284e9595fc3baed6e035903c05b93266555 upstream.

The x86 MORUS implementations all fail the improved AEAD tests because
they produce the wrong result with some data layouts.  The issue is that
they assume that if the skcipher_walk API gives 'nbytes' not aligned to
the walksize (a.k.a. walk.stride), then it is the end of the data.  In
fact, this can happen before the end.

Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can
incorrectly sleep in the skcipher_walk_*() functions while preemption
has been disabled by kernel_fpu_begin().

Fix these bugs.

Fixes: 56e8e57fc3a7 ("crypto: morus - Add common SIMD glue code for MORUS")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek
Signed-off-by: Eric Biggers
Reviewed-by: Ondrej Mosnacek
Signed-off-by: Herbert Xu
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/crypto/morus1280_glue.c | 40 +++++++++++++++------------------------
 arch/x86/crypto/morus640_glue.c  | 39 ++++++++++++++------------------------
 2 files changed, 31 insertions(+), 48 deletions(-)

--- a/arch/x86/crypto/morus1280_glue.c
+++ b/arch/x86/crypto/morus1280_glue.c
@@ -85,31 +85,20 @@ static void crypto_morus1280_glue_proces
 static void crypto_morus1280_glue_process_crypt(struct morus1280_state *state,
                                                 struct morus1280_ops ops,
-                                                struct aead_request *req)
+                                                struct skcipher_walk *walk)
 {
-        struct skcipher_walk walk;
-        u8 *cursor_src, *cursor_dst;
-        unsigned int chunksize, base;
-
-        ops.skcipher_walk_init(&walk, req, false);
-
-        while (walk.nbytes) {
-                cursor_src = walk.src.virt.addr;
-                cursor_dst = walk.dst.virt.addr;
-                chunksize = walk.nbytes;
-
-                ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
-
-                base = chunksize & ~(MORUS1280_BLOCK_SIZE - 1);
-                cursor_src += base;
-                cursor_dst += base;
-                chunksize &= MORUS1280_BLOCK_SIZE - 1;
-
-                if (chunksize > 0)
-                        ops.crypt_tail(state, cursor_src, cursor_dst,
-                                       chunksize);
+        while (walk->nbytes >= MORUS1280_BLOCK_SIZE) {
+                ops.crypt_blocks(state, walk->src.virt.addr,
+                                 walk->dst.virt.addr,
+                                 round_down(walk->nbytes,
+                                            MORUS1280_BLOCK_SIZE));
+                skcipher_walk_done(walk, walk->nbytes % MORUS1280_BLOCK_SIZE);
+        }
 
-                skcipher_walk_done(&walk, 0);
+        if (walk->nbytes) {
+                ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
+                               walk->nbytes);
+                skcipher_walk_done(walk, 0);
         }
 }
 
@@ -147,12 +136,15 @@ static void crypto_morus1280_glue_crypt(
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
         struct morus1280_ctx *ctx = crypto_aead_ctx(tfm);
         struct morus1280_state state;
+        struct skcipher_walk walk;
+
+        ops.skcipher_walk_init(&walk, req, true);
 
         kernel_fpu_begin();
 
         ctx->ops->init(&state, &ctx->key, req->iv);
         crypto_morus1280_glue_process_ad(&state, ctx->ops, req->src,
                                          req->assoclen);
-        crypto_morus1280_glue_process_crypt(&state, ops, req);
+        crypto_morus1280_glue_process_crypt(&state, ops, &walk);
         ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
 
         kernel_fpu_end();
--- a/arch/x86/crypto/morus640_glue.c
+++ b/arch/x86/crypto/morus640_glue.c
@@ -85,31 +85,19 @@ static void crypto_morus640_glue_process
 static void crypto_morus640_glue_process_crypt(struct morus640_state *state,
                                                struct morus640_ops ops,
-                                               struct aead_request *req)
+                                               struct skcipher_walk *walk)
 {
-        struct skcipher_walk walk;
-        u8 *cursor_src, *cursor_dst;
-        unsigned int chunksize, base;
-
-        ops.skcipher_walk_init(&walk, req, false);
-
-        while (walk.nbytes) {
-                cursor_src = walk.src.virt.addr;
-                cursor_dst = walk.dst.virt.addr;
-                chunksize = walk.nbytes;
-
-                ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
-
-                base = chunksize & ~(MORUS640_BLOCK_SIZE - 1);
-                cursor_src += base;
-                cursor_dst += base;
-                chunksize &= MORUS640_BLOCK_SIZE - 1;
-
-                if (chunksize > 0)
-                        ops.crypt_tail(state, cursor_src, cursor_dst,
-                                       chunksize);
+        while (walk->nbytes >= MORUS640_BLOCK_SIZE) {
+                ops.crypt_blocks(state, walk->src.virt.addr,
+                                 walk->dst.virt.addr,
+                                 round_down(walk->nbytes, MORUS640_BLOCK_SIZE));
+                skcipher_walk_done(walk, walk->nbytes % MORUS640_BLOCK_SIZE);
+        }
 
-                skcipher_walk_done(&walk, 0);
+        if (walk->nbytes) {
+                ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
+                               walk->nbytes);
+                skcipher_walk_done(walk, 0);
         }
 }
 
@@ -143,12 +131,15 @@ static void crypto_morus640_glue_crypt(s
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
         struct morus640_ctx *ctx = crypto_aead_ctx(tfm);
         struct morus640_state state;
+        struct skcipher_walk walk;
+
+        ops.skcipher_walk_init(&walk, req, true);
 
         kernel_fpu_begin();
 
         ctx->ops->init(&state, &ctx->key, req->iv);
         crypto_morus640_glue_process_ad(&state, ctx->ops, req->src,
                                         req->assoclen);
-        crypto_morus640_glue_process_crypt(&state, ops, req);
+        crypto_morus640_glue_process_crypt(&state, ops, &walk);
         ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
 
         kernel_fpu_end();
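
For readers who want the corrected pattern in isolation, here is a minimal
sketch of the per-step walk loop the fix adopts.  It is an illustration only,
not driver code: "struct example_state", crypt_blocks(), crypt_tail() and
BLOCK_SIZE are placeholder names rather than the identifiers used in the
driver, and the walk is assumed to have been initialized with atomic=true
before kernel_fpu_begin() so that skcipher_walk_done() never sleeps while
preemption is disabled.

/*
 * Minimal sketch of the per-step skcipher_walk pattern, with placeholder
 * names (example_state, crypt_blocks, crypt_tail, BLOCK_SIZE) standing in
 * for the algorithm-specific state, ops and block size.
 */
#include <crypto/internal/skcipher.h>   /* struct skcipher_walk, skcipher_walk_done() */
#include <linux/kernel.h>               /* round_down() */

static void example_process_crypt(struct example_state *state,
                                  struct skcipher_walk *walk)
{
        /*
         * Each walk step can cover several full blocks plus a remainder;
         * a step smaller than the stride does not necessarily mean the end
         * of the data, so keep walking until fewer than BLOCK_SIZE bytes
         * remain.
         */
        while (walk->nbytes >= BLOCK_SIZE) {
                unsigned int n = round_down(walk->nbytes, BLOCK_SIZE);

                crypt_blocks(state, walk->src.virt.addr,
                             walk->dst.virt.addr, n);
                /* Return the unprocessed remainder to the walk. */
                skcipher_walk_done(walk, walk->nbytes - n);
        }

        /* Only the final step may leave a partial block behind. */
        if (walk->nbytes) {
                crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
                           walk->nbytes);
                skcipher_walk_done(walk, 0);
        }
}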