From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Corentin Labbe, Herbert Xu,
    "David S. Miller", Maxime Ripard, Chen-Yu Tsai, Tom Lendacky,
    Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari, Shawn Guo,
    Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam, NXP Linux Team,
    Jamie Iles, Eric Biggers, Tero Kristo, Matthias Brugger
Subject: [PATCH v3 05/13] crypto: sun8i-ce - permit asynchronous skcipher as fallback
Date: Tue, 30 Jun 2020 14:18:59 +0200
Message-Id: <20200630121907.24274-6-ardb@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200630121907.24274-1-ardb@kernel.org>
References: <20200630121907.24274-1-ardb@kernel.org>

Even though the sun8i-ce driver implements asynchronous versions of
ecb(aes) and cbc(aes), the fallbacks it allocates are required to be
synchronous. Given that SIMD based software implementations are usually
asynchronous as well, even though they rarely complete asynchronously
(this typically only happens in cases where the request was made from
softirq context, while SIMD was already in use in the task context that
it interrupted), these implementations are disregarded, and either the
generic C version or another table based version implemented in
assembler is selected instead.

Since falling back to synchronous AES is not only a performance issue,
but potentially a security issue as well (due to the fact that table
based AES is not time invariant), let's fix this by allocating an
ordinary skcipher as the fallback, and invoking it with the completion
routine that was given to the outer request.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
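Note (illustration only, not part of the patch): for reviewers unfamiliar
with the pattern, here is a minimal sketch of what the driver ends up doing
after this change. The names my_tfm_ctx, my_req_ctx, my_init, my_exit and
my_fallback are hypothetical, and "cbc(aes)" is only an example algorithm
name; the real driver uses sun8i_cipher_tfm_ctx, sun8i_cipher_req_ctx and
sun8i_ce_cipher_init. The key points are that the fallback is allocated
without masking CRYPTO_ALG_ASYNC, that its request is carved out of the tail
of the driver's own request context, and that the caller's completion
callback is chained so an asynchronous fallback completes transparently:

  /* Hypothetical example code, not taken from the patch. */
  #include <linux/err.h>
  #include <crypto/skcipher.h>
  #include <crypto/internal/skcipher.h>

  struct my_tfm_ctx {
          struct crypto_skcipher *fallback_tfm;
  };

  struct my_req_ctx {
          /* driver-private per-request state would live here */
          struct skcipher_request fallback_req;   /* must remain last */
  };

  static int my_init(struct crypto_skcipher *tfm)
  {
          struct my_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);

          /* Not masking CRYPTO_ALG_ASYNC permits an asynchronous fallback. */
          ctx->fallback_tfm = crypto_alloc_skcipher("cbc(aes)", 0,
                                                    CRYPTO_ALG_NEED_FALLBACK);
          if (IS_ERR(ctx->fallback_tfm))
                  return PTR_ERR(ctx->fallback_tfm);

          /* Reserve room for the fallback's request behind our own context. */
          crypto_skcipher_set_reqsize(tfm, sizeof(struct my_req_ctx) +
                                      crypto_skcipher_reqsize(ctx->fallback_tfm));
          return 0;
  }

  static void my_exit(struct crypto_skcipher *tfm)
  {
          struct my_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);

          crypto_free_skcipher(ctx->fallback_tfm);
  }

  static int my_fallback(struct skcipher_request *areq, bool decrypt)
  {
          struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
          struct my_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
          struct my_req_ctx *rctx = skcipher_request_ctx(areq);

          skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
          /* Chain the caller's completion so async completion is propagated. */
          skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
                                        areq->base.complete, areq->base.data);
          skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
                                     areq->cryptlen, areq->iv);

          return decrypt ? crypto_skcipher_decrypt(&rctx->fallback_req)
                         : crypto_skcipher_encrypt(&rctx->fallback_req);
  }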
 drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c | 41 ++++++++++----------
 drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h        |  3 +-
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
index a6abb701bfc6..82c99da24dfd 100644
--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
@@ -58,23 +58,20 @@ static int sun8i_ce_cipher_fallback(struct skcipher_request *areq)
 #ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
         struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
         struct sun8i_ce_alg_template *algt;
-#endif
-        SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, op->fallback_tfm);
 
-#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
         algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher);
         algt->stat_fb++;
 #endif
 
-        skcipher_request_set_sync_tfm(subreq, op->fallback_tfm);
-        skcipher_request_set_callback(subreq, areq->base.flags, NULL, NULL);
-        skcipher_request_set_crypt(subreq, areq->src, areq->dst,
+        skcipher_request_set_tfm(&rctx->fallback_req, op->fallback_tfm);
+        skcipher_request_set_callback(&rctx->fallback_req, areq->base.flags,
+                                      areq->base.complete, areq->base.data);
+        skcipher_request_set_crypt(&rctx->fallback_req, areq->src, areq->dst,
                                    areq->cryptlen, areq->iv);
         if (rctx->op_dir & CE_DECRYPTION)
-                err = crypto_skcipher_decrypt(subreq);
+                err = crypto_skcipher_decrypt(&rctx->fallback_req);
         else
-                err = crypto_skcipher_encrypt(subreq);
-        skcipher_request_zero(subreq);
+                err = crypto_skcipher_encrypt(&rctx->fallback_req);
         return err;
 }
 
@@ -335,18 +332,20 @@ int sun8i_ce_cipher_init(struct crypto_tfm *tfm)
         algt = container_of(alg, struct sun8i_ce_alg_template, alg.skcipher);
         op->ce = algt->ce;
 
-        sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx);
-
-        op->fallback_tfm = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
+        op->fallback_tfm = crypto_alloc_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
         if (IS_ERR(op->fallback_tfm)) {
                 dev_err(op->ce->dev, "ERROR: Cannot allocate fallback for %s %ld\n",
                         name, PTR_ERR(op->fallback_tfm));
                 return PTR_ERR(op->fallback_tfm);
         }
 
+        sktfm->reqsize = sizeof(struct sun8i_cipher_req_ctx) +
+                         crypto_skcipher_reqsize(op->fallback_tfm);
+
+
         dev_info(op->ce->dev, "Fallback for %s is %s\n",
                  crypto_tfm_alg_driver_name(&sktfm->base),
-                 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(&op->fallback_tfm->base)));
+                 crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm)));
 
         op->enginectx.op.do_one_request = sun8i_ce_handle_cipher_request;
         op->enginectx.op.prepare_request = NULL;
@@ -358,7 +357,7 @@ int sun8i_ce_cipher_init(struct crypto_tfm *tfm)
 
         return 0;
 error_pm:
-        crypto_free_sync_skcipher(op->fallback_tfm);
+        crypto_free_skcipher(op->fallback_tfm);
         return err;
 }
 
@@ -370,7 +369,7 @@ void sun8i_ce_cipher_exit(struct crypto_tfm *tfm)
                 memzero_explicit(op->key, op->keylen);
                 kfree(op->key);
         }
-        crypto_free_sync_skcipher(op->fallback_tfm);
+        crypto_free_skcipher(op->fallback_tfm);
         pm_runtime_put_sync_suspend(op->ce->dev);
 }
 
@@ -400,10 +399,10 @@ int sun8i_ce_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
         if (!op->key)
                 return -ENOMEM;
 
-        crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-        crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+        crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+        crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-        return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+        return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
 
 int sun8i_ce_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
@@ -425,8 +424,8 @@ int sun8i_ce_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
         if (!op->key)
                 return -ENOMEM;
 
-        crypto_sync_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
-        crypto_sync_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+        crypto_skcipher_clear_flags(op->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+        crypto_skcipher_set_flags(op->fallback_tfm, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-        return crypto_sync_skcipher_setkey(op->fallback_tfm, key, keylen);
+        return crypto_skcipher_setkey(op->fallback_tfm, key, keylen);
 }
diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
index 0e9eac397e1b..4ac0f91e2800 100644
--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
+++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
@@ -187,6 +187,7 @@ struct sun8i_ce_dev {
 struct sun8i_cipher_req_ctx {
         u32 op_dir;
         int flow;
+        struct skcipher_request fallback_req;   // keep at the end
 };
 
 /*
@@ -202,7 +203,7 @@ struct sun8i_cipher_tfm_ctx {
         u32 *key;
         u32 keylen;
         struct sun8i_ce_dev *ce;
-        struct crypto_sync_skcipher *fallback_tfm;
+        struct crypto_skcipher *fallback_tfm;
 };
 
 /*
-- 
2.17.1