From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, arnd@arndb.de, Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH] crypto: memneq: avoid implicit unaligned accesses
Date: Wed, 19 Jan 2022 10:31:09 +0100
Message-Id: <20220119093109.1567314-1-ardb@kernel.org>
List-ID: <linux-crypto.vger.kernel.org>

The C standard does not permit dereferencing a pointer that is not suitably aligned for the pointed-to type, and doing so is undefined behavior even if the underlying hardware supports it. This means that conditionally dereferencing such pointers based on whether CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y is set is not the right thing to do, and on ARM it actually results in alignment faults, which are fixed up on a slow path.
Instead, we should use the unaligned accessors in such cases: on architectures that don't care about alignment they produce identical codegen, whereas on ARM, for instance, the compiler will avoid doubleword loads and stores in favour of ordinary ones, which tolerate misalignment.

Link: https://lore.kernel.org/linux-crypto/CAHk-=wiKkdYLY0bv+nXrcJz3NH9mAqPAafX7PpW5EwVtxsEu7Q@mail.gmail.com/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/memneq.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/crypto/memneq.c b/crypto/memneq.c
index afed1bd16aee..fb11608b1ec1 100644
--- a/crypto/memneq.c
+++ b/crypto/memneq.c
@@ -60,6 +60,7 @@
  */

 #include <crypto/algapi.h>
+#include <asm/unaligned.h>

 #ifndef __HAVE_ARCH_CRYPTO_MEMNEQ

@@ -71,7 +72,8 @@ __crypto_memneq_generic(const void *a, const void *b, size_t size)

 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
 	while (size >= sizeof(unsigned long)) {
-		neq |= *(unsigned long *)a ^ *(unsigned long *)b;
+		neq |= get_unaligned((unsigned long *)a) ^
+		       get_unaligned((unsigned long *)b);
 		OPTIMIZER_HIDE_VAR(neq);
 		a += sizeof(unsigned long);
 		b += sizeof(unsigned long);
@@ -95,18 +97,24 @@ static inline unsigned long __crypto_memneq_16(const void *a, const void *b)

 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	if (sizeof(unsigned long) == 8) {
-		neq |= *(unsigned long *)(a) ^ *(unsigned long *)(b);
+		neq |= get_unaligned((unsigned long *)a) ^
+		       get_unaligned((unsigned long *)b);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8);
+		neq |= get_unaligned((unsigned long *)(a + 8)) ^
+		       get_unaligned((unsigned long *)(b + 8));
 		OPTIMIZER_HIDE_VAR(neq);
 	} else if (sizeof(unsigned int) == 4) {
-		neq |= *(unsigned int *)(a) ^ *(unsigned int *)(b);
+		neq |= get_unaligned((unsigned int *)a) ^
+		       get_unaligned((unsigned int *)b);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+4) ^ *(unsigned int *)(b+4);
+		neq |= get_unaligned((unsigned int *)(a + 4)) ^
+		       get_unaligned((unsigned int *)(b + 4));
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+8) ^ *(unsigned int *)(b+8);
+		neq |= get_unaligned((unsigned int *)(a + 8)) ^
+		       get_unaligned((unsigned int *)(b + 8));
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
+		neq |= get_unaligned((unsigned int *)(a + 12)) ^
+		       get_unaligned((unsigned int *)(b + 12));
 		OPTIMIZER_HIDE_VAR(neq);
 	} else
 #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
--
2.30.2