From: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com,
	Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Subject: [PATCH 4/4] crypto: qat - fallback for xts with 192 bit keys
Date: Thu, 25 Jun 2020 13:59:04 +0100
Message-Id: <20200625125904.142840-5-giovanni.cabiddu@intel.com>
In-Reply-To: <20200625125904.142840-1-giovanni.cabiddu@intel.com>
References: <20200625125904.142840-1-giovanni.cabiddu@intel.com>

Forward requests to another provider if the key length is 192 bits, as
this is not supported by the QAT accelerators.

This fixes the following issue reported by the extra self test:

alg: skcipher: qat_aes_xts setkey failed on test vector "random: len=3204
klen=48"; expected_error=0, actual_error=-22, flags=0x1

Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
---
A condensed sketch of the fallback idiom appears after the patch.

 drivers/crypto/qat/qat_common/qat_algs.c | 67 ++++++++++++++++++++++--
 1 file changed, 64 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 77bdff0118f7..82c801ec1680 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -88,6 +88,8 @@ struct qat_alg_skcipher_ctx {
 	struct icp_qat_fw_la_bulk_req enc_fw_req;
 	struct icp_qat_fw_la_bulk_req dec_fw_req;
 	struct qat_crypto_instance *inst;
+	struct crypto_skcipher *ftfm;
+	bool fallback;
 };
 
 static int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg)
@@ -994,12 +996,25 @@ static int qat_alg_skcipher_ctr_setkey(struct crypto_skcipher *tfm,
 static int qat_alg_skcipher_xts_setkey(struct crypto_skcipher *tfm,
 				       const u8 *key, unsigned int keylen)
 {
+	struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	int ret;
 
 	ret = xts_verify_key(tfm, key, keylen);
 	if (ret)
 		return ret;
 
+	if (keylen >> 1 == AES_KEYSIZE_192) {
+		ret = crypto_skcipher_setkey(ctx->ftfm, key, keylen);
+		if (ret)
+			return ret;
+
+		ctx->fallback = true;
+
+		return 0;
+	}
+
+	ctx->fallback = false;
+
 	return qat_alg_skcipher_setkey(tfm, key, keylen,
 				       ICP_QAT_HW_CIPHER_XTS_MODE);
 }
@@ -1066,9 +1081,19 @@ static int qat_alg_skcipher_blk_encrypt(struct skcipher_request *req)
 
 static int qat_alg_skcipher_xts_encrypt(struct skcipher_request *req)
 {
+	struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req);
+	struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(stfm);
+	struct skcipher_request *nreq = skcipher_request_ctx(req);
+
 	if (req->cryptlen < XTS_BLOCK_SIZE)
 		return -EINVAL;
 
+	if (ctx->fallback) {
+		memcpy(nreq, req, sizeof(*req));
+		skcipher_request_set_tfm(nreq, ctx->ftfm);
+		return crypto_skcipher_encrypt(nreq);
+	}
+
 	return qat_alg_skcipher_encrypt(req);
 }
 
@@ -1134,9 +1159,19 @@ static int qat_alg_skcipher_blk_decrypt(struct skcipher_request *req)
 
 static int qat_alg_skcipher_xts_decrypt(struct skcipher_request *req)
 {
+	struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req);
+	struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(stfm);
+	struct skcipher_request *nreq = skcipher_request_ctx(req);
+
 	if (req->cryptlen < XTS_BLOCK_SIZE)
 		return -EINVAL;
 
+	if (ctx->fallback) {
+		memcpy(nreq, req, sizeof(*req));
+		skcipher_request_set_tfm(nreq, ctx->ftfm);
+		return crypto_skcipher_decrypt(nreq);
+	}
+
 	return qat_alg_skcipher_decrypt(req);
 }
 
@@ -1200,6 +1235,23 @@ static int qat_alg_skcipher_init_tfm(struct crypto_skcipher *tfm)
 	return 0;
 }
 
+static int qat_alg_skcipher_init_xts_tfm(struct crypto_skcipher *tfm)
+{
+	struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	int reqsize;
+
+	ctx->ftfm = crypto_alloc_skcipher("xts(aes)", 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(ctx->ftfm))
+		return PTR_ERR(ctx->ftfm);
+
+	reqsize = max(sizeof(struct qat_crypto_request),
+		      sizeof(struct skcipher_request) +
+		      crypto_skcipher_reqsize(ctx->ftfm));
+	crypto_skcipher_set_reqsize(tfm, reqsize);
+
+	return 0;
+}
+
 static void qat_alg_skcipher_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
@@ -1227,6 +1279,15 @@ static void qat_alg_skcipher_exit_tfm(struct crypto_skcipher *tfm)
 	qat_crypto_put_instance(inst);
 }
 
+static void qat_alg_skcipher_exit_xts_tfm(struct crypto_skcipher *tfm)
+{
+	struct qat_alg_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (ctx->ftfm)
+		crypto_free_skcipher(ctx->ftfm);
+
+	qat_alg_skcipher_exit_tfm(tfm);
+}
 
 static struct aead_alg qat_aeads[] = { {
 	.base = {
@@ -1321,14 +1382,14 @@ static struct skcipher_alg qat_skciphers[] = { {
 	.base.cra_name = "xts(aes)",
 	.base.cra_driver_name = "qat_aes_xts",
 	.base.cra_priority = 4001,
-	.base.cra_flags = CRYPTO_ALG_ASYNC,
+	.base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
 	.base.cra_blocksize = AES_BLOCK_SIZE,
 	.base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx),
 	.base.cra_alignmask = 0,
 	.base.cra_module = THIS_MODULE,
-	.init = qat_alg_skcipher_init_tfm,
-	.exit = qat_alg_skcipher_exit_tfm,
+	.init = qat_alg_skcipher_init_xts_tfm,
+	.exit = qat_alg_skcipher_exit_xts_tfm,
 	.setkey = qat_alg_skcipher_xts_setkey,
 	.decrypt = qat_alg_skcipher_xts_decrypt,
 	.encrypt = qat_alg_skcipher_xts_encrypt,
 	.min_keysize = 2 * AES_MIN_KEY_SIZE,
-- 
2.26.2
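
For reference, the sketch below condenses the fallback idiom this patch
uses: allocate a synchronous software "xts(aes)" tfm when the driver tfm
is initialised, steer unsupported key sizes to it at setkey time, and
forward the request to it in the encrypt/decrypt paths. All my_* names
are hypothetical and the hardware paths are stubbed out, so treat it as
an illustration of the idiom under those assumptions, not the driver
code itself.

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <crypto/aes.h>
#include <crypto/skcipher.h>
#include <crypto/xts.h>

struct my_ctx {
	struct crypto_skcipher *ftfm;	/* software fallback tfm */
	bool fallback;			/* true when the current key needs it */
};

static int my_xts_init(struct crypto_skcipher *tfm)
{
	struct my_ctx *ctx = crypto_skcipher_ctx(tfm);

	/* mask CRYPTO_ALG_ASYNC selects a synchronous implementation */
	ctx->ftfm = crypto_alloc_skcipher("xts(aes)", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(ctx->ftfm))
		return PTR_ERR(ctx->ftfm);

	/*
	 * Leave room in the request ctx for a fallback sub-request. The
	 * real driver takes max() of this and its own request struct.
	 */
	crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
					 crypto_skcipher_reqsize(ctx->ftfm));
	return 0;
}

static int my_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
			 unsigned int keylen)
{
	struct my_ctx *ctx = crypto_skcipher_ctx(tfm);
	int ret;

	ret = xts_verify_key(tfm, key, keylen);
	if (ret)
		return ret;

	/* An XTS key is two AES keys, so 48 bytes means AES-192. */
	ctx->fallback = (keylen >> 1 == AES_KEYSIZE_192);
	if (ctx->fallback)
		return crypto_skcipher_setkey(ctx->ftfm, key, keylen);

	return 0;			/* hardware setkey would go here */
}

static int my_xts_encrypt(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct my_ctx *ctx = crypto_skcipher_ctx(tfm);
	struct skcipher_request *subreq = skcipher_request_ctx(req);

	if (ctx->fallback) {
		/* Reuse src/dst/iv/callback from the original request. */
		*subreq = *req;
		skcipher_request_set_tfm(subreq, ctx->ftfm);
		return crypto_skcipher_encrypt(subreq);
	}

	return -EOPNOTSUPP;		/* hardware path would go here */
}

A driver using this idiom also sets CRYPTO_ALG_NEED_FALLBACK in
cra_flags, as the patch does, so that the crypto core never selects it
when another algorithm is allocating a fallback of its own.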