Date: Fri, 25 Nov 2022 11:31:56 +0000
From: Catalin Marinas
To: Herbert Xu
Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier, Arnd Bergmann,
 Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
 Linux Memory Management List, Linux ARM,
 Linux Kernel Mailing List, "David S. Miller",
 Linux Crypto Mailing List
Subject: Re: [v2 PATCH 2/9] crypto: api - Add crypto_tfm_ctx_dma
X-Mailing-List: linux-crypto@vger.kernel.org

Hi Herbert,

Thanks for putting this together. I'll try to go through the series but
my crypto knowledge is fairly limited.

On Fri, Nov 25, 2022 at 12:36:31PM +0800, Herbert Xu wrote:
> diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
> index f50c5d1725da..4c99eb66e654 100644
> --- a/include/crypto/algapi.h
> +++ b/include/crypto/algapi.h
> @@ -7,6 +7,7 @@
>  #ifndef _CRYPTO_ALGAPI_H
>  #define _CRYPTO_ALGAPI_H
>  
> +#include <...>
>  #include <...>
>  #include <...>
>  #include <...>
> @@ -25,6 +26,14 @@
>  #define MAX_CIPHER_BLOCKSIZE	16
>  #define MAX_CIPHER_ALIGNMASK	15
>  
> +#ifdef ARCH_DMA_MINALIGN
> +#define CRYPTO_DMA_ALIGN ARCH_DMA_MINALIGN
> +#else
> +#define CRYPTO_DMA_ALIGN CRYPTO_MINALIGN
> +#endif
> +
> +#define CRYPTO_DMA_PADDING ((CRYPTO_DMA_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))

Is the CRYPTO_DMA_PADDING used anywhere? I couldn't find it in this
series and I'd rather drop it, together with CRYPTO_DMA_ALIGN (see
below).
> +
>  struct crypto_aead;
>  struct crypto_instance;
>  struct module;
> @@ -189,10 +198,38 @@ static inline void crypto_xor_cpy(u8 *dst, const u8 *src1, const u8 *src2,
>  	}
>  }
>  
> +static inline void *crypto_tfm_ctx(struct crypto_tfm *tfm)
> +{
> +	return tfm->__crt_ctx;
> +}
> +
> +static inline void *crypto_tfm_ctx_align(struct crypto_tfm *tfm,
> +					 unsigned int align)
> +{
> +	if (align <= crypto_tfm_ctx_alignment())
> +		align = 1;
> +
> +	return PTR_ALIGN(crypto_tfm_ctx(tfm), align);
> +}
> +
>  static inline void *crypto_tfm_ctx_aligned(struct crypto_tfm *tfm)
>  {
> -	return PTR_ALIGN(crypto_tfm_ctx(tfm),
> -			 crypto_tfm_alg_alignmask(tfm) + 1);
> +	return crypto_tfm_ctx_align(tfm, crypto_tfm_alg_alignmask(tfm) + 1);
>  }

I had an attempt to make crypto_tfm_alg_alignmask() the larger of the
cra_alignmask and ARCH_DMA_MINALIGN but for some reason the kernel
started to panic, so I gave up.

> +
> +static inline unsigned int crypto_dma_align(void)
> +{
> +	return CRYPTO_DMA_ALIGN;
> +}

We have a generic dma_get_cache_alignment() function which currently is
either 1 or ARCH_DMA_MINALIGN, if the latter is defined. My plan is to
eventually make this dynamic based on the actual cache line size (on
most arm64 systems it would be 64 rather than 128). So could you use
this instead of defining a CRYPTO_DMA_ALIGN? The only difference would
be that dma_get_cache_alignment() returns 1 rather than
ARCH_KMALLOC_MINALIGN if ARCH_DMA_MINALIGN is not defined, but I don't
think that's an issue.

> +
> +static inline unsigned int crypto_dma_padding(void)
> +{
> +	return (crypto_dma_align() - 1) & ~(crypto_tfm_ctx_alignment() - 1);
> +}
> +
> +static inline void *crypto_tfm_ctx_dma(struct crypto_tfm *tfm)
> +{
> +	return crypto_tfm_ctx_align(tfm, crypto_dma_align());
> +}

These would need to cope with crypto_dma_align() < ARCH_KMALLOC_MINALIGN.
I think that's fine, the padding will be 0 if crypto_dma_align() is 1.

-- 
Catalin