Date: Thu, 21 Apr 2022 14:25:22 +0100
From: Catalin Marinas
To: Arnd Bergmann
Cc: Christoph Hellwig, Ard Biesheuvel, Herbert Xu, Will Deacon,
    Marc Zyngier, Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
    Linux Memory Management List, Linux ARM,
    Linux Kernel Mailing List, "David S. Miller"
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN

On Thu, Apr 21, 2022 at 02:28:45PM +0200, Arnd Bergmann wrote:
> On Thu, Apr 21, 2022 at 1:06 PM Catalin Marinas wrote:
> > On Thu, Apr 21, 2022 at 12:20:22AM -0700, Christoph Hellwig wrote:
> > > Btw, there is another option: Most real systems already require having
> > > swiotlb to bounce buffer in some cases. We could simply force bounce
> > > buffering in the dma mapping code for too small or not properly aligned
> > > transfers and just decrease the dma alignment.
> >
> > We can force bounce if size is small but checking the alignment is
> > trickier. Normally the beginning of the buffer is aligned but the end is
> > at some sizeof() distance. We need to know whether the end is in a
> > kmalloc-128 cache and that requires reaching out to the slab internals.
> > That's doable and not expensive but it needs to be done for every small
> > size getting to the DMA API, something like (for mm/slub.c):
> >
> >     folio = virt_to_folio(x);
> >     slab = folio_slab(folio);
> >     if (slab->slab_cache->align < ARCH_DMA_MINALIGN)
> >         ... bounce ...
> >
> > (and a bit different for mm/slab.c)
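Spelled out a bit more (only a sketch; kmalloc_buf_unaligned() is a
made-up name here, not an existing kernel API, and the details would
differ between mm/slub.c and mm/slab.c):

    /*
     * Return true if @x may live in a kmalloc cache whose objects are
     * packed closer than ARCH_DMA_MINALIGN, i.e. the buffer could
     * share a cache line with unrelated data.
     */
    static bool kmalloc_buf_unaligned(const void *x)
    {
        struct folio *folio = virt_to_folio(x);
        struct slab *slab;

        /* page allocator memory is at least page aligned */
        if (!folio_test_slab(folio))
            return false;

        slab = folio_slab(folio);
        return slab->slab_cache->align < ARCH_DMA_MINALIGN;
    }

The dma_map_*() path would then bounce only when both the size check
and a predicate like this say the buffer may be misaligned.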
> I think the decision to bounce or not can be based on the actual
> cache line size at runtime, so most commonly 64 bytes on arm64,
> even though the compile-time limit is 128 bytes.
>
> We also know that larger slabs are all cacheline aligned, so simply
> comparing the transfer size is enough to rule out most of them: any
> transfer larger than 96 bytes must come from the kmalloc-128 or
> larger cache, so that works like before.

There's also the case with 128-byte cache lines and kmalloc-192.

> For transfers <= 96 bytes, the possibilities are:
>
> 1. kmalloc-32 or smaller: always needs to bounce
> 2. kmalloc-96 with at least one byte in a partial cache line: needs
>    to bounce
> 3. kmalloc-64: may skip the bounce
> 4. kmalloc-128 or larger, or not a slab cache but a partial
>    transfer: may skip the bounce
>
> I would guess that the first case is the most common here, so unless
> bouncing one or two cache lines is extremely expensive, I don't
> expect it to be worth optimizing for the latter two cases.
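To make that size filter concrete (again just a sketch; the helper
name is made up, and generalising the 96-byte cutoff to "a line and a
half" is my reading of the cases above):

    /*
     * Bounce anything that could come from a kmalloc cache smaller
     * than a cache line and a half. With 64-byte lines the cutoff is
     * 96, so kmalloc-96 and below are bounced; with 128-byte lines it
     * is 192, which also catches kmalloc-192.
     */
    static bool dma_size_may_need_bounce(size_t size)
    {
        unsigned int cls = cache_line_size();

        return size <= cls + cls / 2;
    }

This deliberately over-bounces the kmalloc-64 case (3) and small
partial transfers (4) rather than optimizing for them.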
I think so. If someone complains of a performance regression, we can
look at optimising the bounce. I have a suspicion the cost of copying
two cache lines is small compared to swiotlb_find_slots() etc.

-- 
Catalin