From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 4/5] crypto: arm64/gcm - use inline helper to suppress indirect calls
Date: Mon, 29 Jun 2020 09:39:24 +0200
Message-Id: <20200629073925.127538-5-ardb@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200629073925.127538-1-ardb@kernel.org>
References: <20200629073925.127538-1-ardb@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID: <linux-crypto.vger.kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Introduce an inline wrapper for ghash_do_update() that incorporates the
indirect call to the asm routine that is passed as an argument, and keep
the non-SIMD fallback code out of line.

This ensures that all references to the function pointer are inlined
where the address is taken, removing the need for any indirect calls to
begin with.
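As an aside for readers unfamiliar with the trick, here is a minimal
standalone C sketch of the idea. It is not part of the patch: do_step(),
step_p8() and step_p64() are hypothetical stand-ins for
ghash_do_simd_update() and the two PMULL asm routines. Because the wrapper
is __always_inline, the function pointer argument is a compile-time
constant at every call site after inlining, which lets the compiler fold
the call into a direct one rather than an indirect branch.

/*
 * Sketch only -- not part of the patch. The names below are made up
 * for illustration.
 */
#include <stdio.h>

#ifndef __always_inline
#define __always_inline inline __attribute__((always_inline))
#endif

static void step_p8(int blocks)  { printf("p8: %d\n", blocks);  }
static void step_p64(int blocks) { printf("p64: %d\n", blocks); }

/* Always inlined, so 'fn' is a known constant at each call site. */
static __always_inline void do_step(int blocks, void (*fn)(int))
{
        fn(blocks);     /* can become a direct call after inlining */
}

int main(void)
{
        do_step(1, step_p8);    /* direct call to step_p8()  */
        do_step(4, step_p64);   /* direct call to step_p64() */
        return 0;
}

The same reasoning applies to ghash_do_simd_update() in the hunks below:
each caller passes pmull_ghash_update_p8 or pmull_ghash_update_p64
directly, while the non-SIMD fallback stays out of line in
ghash_do_update().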
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/crypto/ghash-ce-glue.c | 85 +++++++++++---------
 1 file changed, 46 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 2ae95dcf648f..da1034867aaa 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -69,36 +69,43 @@ static int ghash_init(struct shash_desc *desc)
 }
 
 static void ghash_do_update(int blocks, u64 dg[], const char *src,
-                            struct ghash_key *key, const char *head,
-                            void (*simd_update)(int blocks, u64 dg[],
-                                                const char *src,
-                                                u64 const h[][2],
-                                                const char *head))
+                            struct ghash_key *key, const char *head)
 {
-        if (likely(crypto_simd_usable() && simd_update)) {
+        be128 dst = { cpu_to_be64(dg[1]), cpu_to_be64(dg[0]) };
+
+        do {
+                const u8 *in = src;
+
+                if (head) {
+                        in = head;
+                        blocks++;
+                        head = NULL;
+                } else {
+                        src += GHASH_BLOCK_SIZE;
+                }
+
+                crypto_xor((u8 *)&dst, in, GHASH_BLOCK_SIZE);
+                gf128mul_lle(&dst, &key->k);
+        } while (--blocks);
+
+        dg[0] = be64_to_cpu(dst.b);
+        dg[1] = be64_to_cpu(dst.a);
+}
+
+static __always_inline
+void ghash_do_simd_update(int blocks, u64 dg[], const char *src,
+                          struct ghash_key *key, const char *head,
+                          void (*simd_update)(int blocks, u64 dg[],
+                                              const char *src,
+                                              u64 const h[][2],
+                                              const char *head))
+{
+        if (likely(crypto_simd_usable())) {
                 kernel_neon_begin();
                 simd_update(blocks, dg, src, key->h, head);
                 kernel_neon_end();
         } else {
-                be128 dst = { cpu_to_be64(dg[1]), cpu_to_be64(dg[0]) };
-
-                do {
-                        const u8 *in = src;
-
-                        if (head) {
-                                in = head;
-                                blocks++;
-                                head = NULL;
-                        } else {
-                                src += GHASH_BLOCK_SIZE;
-                        }
-
-                        crypto_xor((u8 *)&dst, in, GHASH_BLOCK_SIZE);
-                        gf128mul_lle(&dst, &key->k);
-                } while (--blocks);
-
-                dg[0] = be64_to_cpu(dst.b);
-                dg[1] = be64_to_cpu(dst.a);
+                ghash_do_update(blocks, dg, src, key, head);
         }
 }
 
@@ -131,9 +138,9 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
                 do {
                         int chunk = min(blocks, MAX_BLOCKS);
 
-                        ghash_do_update(chunk, ctx->digest, src, key,
-                                        partial ? ctx->buf : NULL,
-                                        pmull_ghash_update_p8);
+                        ghash_do_simd_update(chunk, ctx->digest, src, key,
+                                             partial ? ctx->buf : NULL,
+                                             pmull_ghash_update_p8);
 
                         blocks -= chunk;
                         src += chunk * GHASH_BLOCK_SIZE;
@@ -155,8 +162,8 @@ static int ghash_final(struct shash_desc *desc, u8 *dst)
 
                 memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial);
 
-                ghash_do_update(1, ctx->digest, ctx->buf, key, NULL,
-                                pmull_ghash_update_p8);
+                ghash_do_simd_update(1, ctx->digest, ctx->buf, key, NULL,
+                                     pmull_ghash_update_p8);
         }
         put_unaligned_be64(ctx->digest[1], dst);
         put_unaligned_be64(ctx->digest[0], dst + 8);
@@ -280,9 +287,9 @@ static void gcm_update_mac(u64 dg[], const u8 *src, int count, u8 buf[],
         if (count >= GHASH_BLOCK_SIZE || *buf_count == GHASH_BLOCK_SIZE) {
                 int blocks = count / GHASH_BLOCK_SIZE;
 
-                ghash_do_update(blocks, dg, src, &ctx->ghash_key,
-                                *buf_count ? buf : NULL,
-                                pmull_ghash_update_p64);
+                ghash_do_simd_update(blocks, dg, src, &ctx->ghash_key,
+                                     *buf_count ? buf : NULL,
+                                     pmull_ghash_update_p64);
 
                 src += blocks * GHASH_BLOCK_SIZE;
                 count %= GHASH_BLOCK_SIZE;
@@ -326,8 +333,8 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u64 dg[])
         if (buf_count) {
                 memset(&buf[buf_count], 0, GHASH_BLOCK_SIZE - buf_count);
 
-                ghash_do_update(1, dg, buf, &ctx->ghash_key, NULL,
-                                pmull_ghash_update_p64);
+                ghash_do_simd_update(1, dg, buf, &ctx->ghash_key, NULL,
+                                     pmull_ghash_update_p64);
         }
 }
 
@@ -403,7 +410,7 @@ static int gcm_encrypt(struct aead_request *req)
                         } while (--remaining > 0);
 
                         ghash_do_update(blocks, dg, walk.dst.virt.addr,
-                                        &ctx->ghash_key, NULL, NULL);
+                                        &ctx->ghash_key, NULL);
 
                         err = skcipher_walk_done(&walk,
                                                  walk.nbytes % AES_BLOCK_SIZE);
@@ -422,7 +429,7 @@
 
         tag = (u8 *)&lengths;
         ghash_do_update(1, dg, tag, &ctx->ghash_key,
-                        walk.nbytes ? buf : NULL, NULL);
+                        walk.nbytes ? buf : NULL);
 
         if (walk.nbytes)
                 err = skcipher_walk_done(&walk, 0);
@@ -507,7 +514,7 @@ static int gcm_decrypt(struct aead_request *req)
                         u8 *dst = walk.dst.virt.addr;
 
                         ghash_do_update(blocks, dg, walk.src.virt.addr,
-                                        &ctx->ghash_key, NULL, NULL);
+                                        &ctx->ghash_key, NULL);
 
                         do {
                                 aes_encrypt(&ctx->aes_key, buf, iv);
@@ -530,7 +537,7 @@ static int gcm_decrypt(struct aead_request *req)
 
         tag = (u8 *)&lengths;
         ghash_do_update(1, dg, tag, &ctx->ghash_key,
-                        walk.nbytes ? buf : NULL, NULL);
+                        walk.nbytes ? buf : NULL);
 
         if (walk.nbytes) {
                 aes_encrypt(&ctx->aes_key, buf, iv);
-- 
2.20.1