From: Ard Biesheuvel
Date: Mon, 14 Sep 2020 21:20:00 +0300
Subject: Re: [PATCH RESEND 1/9] crypto: caam/jr - add fallback for XTS with more than 8B IV
To: Horia Geantă
Cc: Herbert Xu, "Andrei Botila (OSS)", Aymen Sghaier, "David S. Miller",
 linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org

On Mon, 14 Sep 2020 at 20:12, Horia Geantă wrote:
>
> On 9/14/2020 7:28 PM, Ard Biesheuvel wrote:
> > On Mon, 14 Sep 2020 at 19:24, Horia Geantă wrote:
> >>
> >> On 9/9/2020 1:10 AM, Herbert Xu wrote:
> >>> On Tue, Sep 08, 2020 at 01:35:04PM +0300, Horia Geantă wrote:
> >>>>
> >>>>> Just go with the get_unaligned unconditionally.
> >>>>
> >>>> Won't this lead to sub-optimal code for ARMv7
> >>>> in case the IV is aligned?
> >>>
> >>> If this should be optimised in ARMv7 then that should be done
> >>> in get_unaligned itself and not open-coded.
> >>>
> >> I am not sure what's wrong with avoiding using the unaligned accessors
> >> in case data is aligned.
> >>
> >> Documentation/core-api/unaligned-memory-access.rst clearly states:
> >> These macros work for memory accesses of any length (not just 32 bits as
> >> in the examples above). Be aware that when compared to standard access of
> >> aligned memory, using these macros to access unaligned memory can be costly in
> >> terms of performance.
> >>
> >> So IMO it makes sense to use get_unaligned() only when needed.
> >> There are several cases of users doing this, e.g. siphash.
> >>
> >
> > For ARMv7 code, using the unaligned accessors unconditionally is fine,
> > and it will not affect performance.
> >
> > In general, when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is defined,
> > you can use the unaligned accessors. If it is not, it helps to have
> > different code paths.
> >
> arch/arm/include/asm/unaligned.h doesn't make use of
> linux/unaligned/access_ok.h, even if CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> is set.
>
> I understand the comment in the file, however using get_unaligned()
> unconditionally takes away the opportunity to generate optimized code
> (using ldrd/ldm) when data is aligned.
>

But the minimal optimization that is possible here (one ldrd/ldm
instruction vs two ldr instructions) is defeated by the fact that you
are using a conditional branch to select between the two. And this is
not even a hot path to begin with.

> > This is a bit murky, and through the years, the interpretation of
> > unaligned-memory-access.rst has shifted a bit, but in this case, it
> > makes no sense to make the distinction.
> >
>
> Thanks,
> Horia
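
For reference, a minimal sketch of the two approaches being weighed
above. It is illustrative only: the helper name xts_iv_exceeds_8b and
the exact check (testing the upper 8 bytes of a 16-byte XTS IV) are
assumptions made for this example, not code taken from the caam patch.

#include <asm/unaligned.h>	/* get_unaligned() */
#include <linux/kernel.h>	/* IS_ALIGNED() */
#include <linux/types.h>

/*
 * Hypothetical helper: report whether the sector index in a 16-byte
 * XTS IV needs more than 8 bytes, by testing its upper 64 bits.
 * The IV buffer may or may not be 64-bit aligned.
 */

/* Variant A - use the accessor unconditionally.  On architectures with
 * CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS this generates ordinary load
 * instructions; elsewhere it falls back to safe byte-wise accesses.
 */
static bool xts_iv_exceeds_8b(const u8 *iv)
{
	return get_unaligned((const u64 *)(iv + 8)) != 0;
}

/* Variant B - open-coded alignment check so the aligned case can use a
 * direct load (ldrd/ldm on ARMv7), falling back to the accessor
 * otherwise.
 */
static bool xts_iv_exceeds_8b_branchy(const u8 *iv)
{
	if (IS_ALIGNED((unsigned long)iv, __alignof__(u64)))
		return *(const u64 *)(iv + 8) != 0;

	return get_unaligned((const u64 *)(iv + 8)) != 0;
}

Variant B saves at most one load on ARMv7 when the IV happens to be
aligned, at the cost of an extra compare-and-branch on every call,
which is the trade-off questioned in the reply above.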