Date: Thu, 13 Oct 2022 17:57:26 +0100
From: Catalin Marinas
To: Isaac Manjarres
Cc: Herbert Xu, Ard Biesheuvel, Will Deacon, Marc Zyngier, Arnd Bergmann,
    Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
    Linux Memory Management List, Linux ARM, Linux Kernel Mailing List,
    "David S. Miller", Saravana Kannan, kernel-team@android.com
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN

On Wed, Oct 12, 2022 at 10:45:45AM -0700, Isaac Manjarres wrote:
> On Fri, Sep 30, 2022 at 07:32:50PM +0100, Catalin Marinas wrote:
> > I started refreshing the series but I got stuck on having to do
> > bouncing for small buffers even when they go through the IOMMU (and
> > I don't have the setup to test it yet).
>
> For devices that go through the IOMMU, are you planning on adding
> similar logic as you did in the direct-DMA path to bounce the buffer
> prior to calling into whatever DMA ops are registered for the device?

Yes.

> Also, there are devices with ARM64 CPUs that disable SWIOTLB usage
> because none of the peripherals that they engage in DMA with need
> bounce buffering, and also to reclaim the default 64 MB of memory that
> SWIOTLB uses. With this approach, SWIOTLB usage will become mandatory
> if those devices need to perform non-coherent DMA transactions that
> may not necessarily be DMA aligned (e.g. small buffers), correct?

Correct. I've been thinking about this, and a way around it is to
combine the original series (dynamic kmalloc_minalign) with the new one
so that the arch code can lower the minimum alignment either to 8 if
swiotlb is available (usually in server space with more RAM) or to the
cache line size if there is no bounce buffer.
> If so, would there be concerns that the memory savings we get back
> from reducing the memory footprint of kmalloc might be defeated by how
> much memory is needed for bounce buffering?

It's not necessarily about the saved memory but also about the locality
of the small buffer allocations: less cache and TLB pressure.

> I understand that we can use the "swiotlb=num_slabs" command line
> parameter to minimize the amount of memory allocated for bounce
> buffering. If this is the only way to minimize this impact, how much
> memory would you recommend to allocate for bounce buffering on a
> system that will only use bounce buffers for non-DMA-aligned buffers?

It's hard to tell; it would need to be found by trial and error on the
specific hardware if you want to lower it. Another issue is that, IIRC,
the swiotlb is allocated in 2K slots, so you may need a lot more bounce
buffer memory than the amount of memory actually being bounced.

I wonder whether swiotlb is actually the best option for bouncing
unaligned buffers. We could use something like mempool_alloc() instead
if we stick to small buffers rather than any (even large) buffer that's
not aligned to a cache line. Or just go for kmem_cache_alloc() directly.
A downside is that we may need GFP_ATOMIC for such allocations, so a
higher risk of failure.

-- 
Catalin
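To make the 2K-slot concern concrete, here is a small sketch of the footprint arithmetic (IO_TLB_SIZE matches the kernel's 2 KiB swiotlb slot size; the helper name is invented for this example):

```c
#include <assert.h>
#include <stddef.h>

/* swiotlb hands out bounce space in fixed 2 KiB slots. */
#define IO_TLB_SIZE 2048

/* Hypothetical helper: swiotlb memory consumed bouncing 'len' bytes. */
static size_t swiotlb_bounce_footprint(size_t len)
{
	size_t slots = (len + IO_TLB_SIZE - 1) / IO_TLB_SIZE;

	return slots * IO_TLB_SIZE;
}
```

A bounced 16-byte kmalloc buffer still occupies a full 2 KiB slot, a 128x overhead, which is why sizing swiotlb for small-buffer bouncing via "swiotlb=num_slabs" is hard to estimate up front.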