Date: Mon, 27 Nov 2023 11:51:21 -0500
From: Alan Stern
To: Christoph Hellwig
Cc: Hamza Mahfooz, Dan Williams, Marek Szyprowski, Andrew, Ferry Toth,
    Andy Shevchenko, Thorsten Leemhuis, iommu@lists.linux.dev,
    Kernel development list, USB mailing list
Subject: Re: Bug in add_dma_entry()'s debugging code
Message-ID: <637d6dff-de56-4815-a15a-1afccde073f0@rowland.harvard.edu>
References: <736e584f-7d5f-41aa-a382-2f4881ba747f@rowland.harvard.edu>
 <20231127160759.GA1668@lst.de>
In-Reply-To: <20231127160759.GA1668@lst.de>

On Mon, Nov 27, 2023 at 05:07:59PM +0100, Christoph Hellwig wrote:
> On Mon, Nov 27, 2023 at 11:02:20AM -0500, Alan Stern wrote:
> > All it looks for is mappings that start on the same cache line.
> > However on architectures that have cache-coherent DMA (such as x86),
> > touching the same cache line does not mean that two DMA mappings will
> > interfere with each other.  To truly overlap, they would have to
> > touch the same _bytes_.
>
> But that is a special case that does not matter.  Linux drivers need
> to be written in a portable way, and that means we have to care about
> platforms that are not DMA coherent.

The buffers in the bug report were allocated by kmalloc().  Doesn't
kmalloc() guarantee that on architectures with non-cache-coherent DMA,
allocations will be aligned on cache-line boundaries (or larger)?
Isn't this what ARCH_DMA_MINALIGN and ARCH_KMALLOC_MINALIGN are supposed
to take care of in include/linux/slab.h?
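To make the distinction concrete, here is a rough sketch of the two
conditions (illustrative userspace-style C, not the actual logic in
kernel/dma/debug.c; the 64-byte line size and the function names are
made up):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE_SIZE 64   /* illustrative value, not a kernel constant */

/*
 * Roughly what the debug check tests: do the two mappings touch the
 * same cache line?  This can be true even when no bytes are shared,
 * which is harmless on cache-coherent platforms such as x86.
 */
static bool share_cacheline(uintptr_t a, size_t alen,
                            uintptr_t b, size_t blen)
{
        uintptr_t a_first = a / CACHELINE_SIZE;
        uintptr_t a_last  = (a + alen - 1) / CACHELINE_SIZE;
        uintptr_t b_first = b / CACHELINE_SIZE;
        uintptr_t b_last  = (b + blen - 1) / CACHELINE_SIZE;

        return a_first <= b_last && b_first <= a_last;
}

/*
 * What a true overlap would require: the two mappings touch the
 * same bytes.
 */
static bool bytes_overlap(uintptr_t a, size_t alen,
                          uintptr_t b, size_t blen)
{
        return a < b + blen && b < a + alen;
}

For example, two 16-byte buffers at offsets 0 and 16 of the same
64-byte line satisfy share_cacheline() but not bytes_overlap().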
> > How should this be fixed?  Since the check done in add_dma_entry() is
> > completely invalid for x86 and similar architectures, should it simply
> > be removed for them?  Or should the check be enhanced to look for
> > byte-granularity overlap?
>
> The patch is everything but "completely invalid".  It points out that
> you violate the Linux DMA API requirements.

Since when does the DMA API require that mappings on x86 must be to
separate cache lines?  Is this documented anywhere?

For that matter, Documentation/core-api/dma-api-howto.rst explicitly
says:

        If you acquired your memory via the page allocator (i.e.
        __get_free_page*()) or the generic memory allocators (i.e.
        kmalloc() or kmem_cache_alloc()) then you may DMA to/from that
        memory using the addresses returned from those routines.

It also says:

        Architectures must ensure that kmalloc'ed buffer is DMA-safe.
        Drivers and subsystems depend on it.  If an architecture isn't
        fully DMA-coherent (i.e. hardware doesn't ensure that data in
        the CPU cache is identical to data in main memory),
        ARCH_DMA_MINALIGN must be set so that the memory allocator
        makes sure that kmalloc'ed buffer doesn't share a cache line
        with the others.  See arch/arm/include/asm/cache.h as an
        example.

It says nothing about avoiding more than one DMA operation at a time to
prevent overlap.  Is the documentation wrong?

> This might not have an
> effect on the particular platform you are currently running on, but it
> is still wrong.

Who decides what is right and what is wrong in this area?  Clearly you
have a different opinion from David S. Miller, Richard Henderson, and
Jakub Jelinek (the authors of that documentation file).

> Note that there have been various mumblings about
> using nosnoop dma on x86, in which case you'll have the issue there
> as well.

Unless the people who implement nosnoop DMA also modify kmalloc() or
ARCH_DMA_MINALIGN.

I guess the real question boils down to where the responsibility should
lie.  Should kmalloc() guarantee that the memory regions it provides
will always be suitable for DMA, with no overlap issues?  Or should all
the innumerable users of kmalloc() be responsible for jumping through
hoops to request arch-specific minimum alignment for their DMA buffers?

Alan Stern
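PS: For concreteness, the "hoops" in question would look something like
this (a hypothetical sketch, not code from any existing driver;
dma_get_cache_alignment() and kmem_cache_create() are real interfaces,
but the cache and function names are made up).  Every driver would have
to set up its own slab cache whose objects are aligned to, and padded
to a multiple of, the platform's minimum DMA alignment so that no two
buffers can share a cache line:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/slab.h>

static struct kmem_cache *my_dma_buf_cache;     /* hypothetical name */

static int my_driver_create_dma_buf_cache(size_t buf_size)
{
        unsigned int align = dma_get_cache_alignment();
        /* Round the object size up to a multiple of the alignment so
         * that consecutive objects never share a cache line. */
        size_t padded_size = (buf_size + align - 1) / align * align;

        my_dma_buf_cache = kmem_cache_create("my_dma_buf", padded_size,
                                             align, 0, NULL);
        return my_dma_buf_cache ? 0 : -ENOMEM;
}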