Date: Thu, 21 Apr 2022 15:44:08 +0100
From: Catalin Marinas
To: Arnd Bergmann
Cc: Christoph Hellwig, Ard Biesheuvel, Herbert Xu, Will Deacon,
    Marc Zyngier, Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
    Linux Memory Management List, Linux ARM, Linux Kernel Mailing List,
"David S. Miller" Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN Message-ID: References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Spam-Status: No, score=-6.7 required=5.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,RCVD_IN_DNSWL_HI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Apr 21, 2022 at 03:47:30PM +0200, Arnd Bergmann wrote: > On Thu, Apr 21, 2022 at 3:25 PM Catalin Marinas wrote: > > On Thu, Apr 21, 2022 at 02:28:45PM +0200, Arnd Bergmann wrote: > > > We also know that larger slabs are all cacheline aligned, so simply > > > comparing the transfer size is enough to rule out most, in this case > > > any transfer larger than 96 bytes must come from the kmalloc-128 > > > or larger cache, so that works like before. > > > > There's also the case with 128-byte cache lines and kmalloc-192. > > Sure, but that's much less common, as the few machines with 128 byte > cache lines tend to also have cache coherent devices IIRC, so we'd > skip the bounce buffer entirely. Do you know which machines still have 128-byte cache lines _and_ non-coherent DMA? If there isn't any that matters, I'd reduce ARCH_DMA_MINALIGN to 64 now (while trying to get to even smaller kmalloc caches). > > > For transfers <=96 bytes, the possibilities are: > > > > > > 1.kmalloc-32 or smaller, always needs to bounce > > > 2. kmalloc-96, but at least one byte in partial cache line, > > > need to bounce > > > 3. kmalloc-64, may skip the bounce. > > > 4. kmalloc-128 or larger, or not a slab cache but a partial > > > transfer, may skip the bounce. > > > > > > I would guess that the first case is the most common here, > > > so unless bouncing one or two cache lines is extremely > > > expensive, I don't expect it to be worth optimizing for the latter > > > two cases. > > > > I think so. If someone complains of a performance regression, we can > > look at optimising the bounce. I have a suspicion the cost of copying > > two cache lines is small compared to swiotlb_find_slots() etc. > > That is possible, and we'd definitely have to watch out for > performance regressions, I'm just skeptical that the cases that > suffer from the extra bouncer buffering on 33..64 byte allocations > benefit much from having a special case if the 1...32 and 65..96 > byte allocations are still slow. > > Another simpler way to do this might be to just not create the > kmalloc-96 (or kmalloc-192) caches, and assuming that any > transfer >=33 (or 65) bytes is safe. I'll give the dma bounce idea a go next week, see how it looks. -- Catalin