From: Robin Murphy
Date: Fri, 1 Mar 2024 17:54:06 +0000
Message-ID: <8869c8b2-29c3-41e4-8f8a-5bcf9c0d22bb@arm.com>
Subject: Re: [PATCH v5 6/6] swiotlb: Remove pointless stride adjustment for allocations >= PAGE_SIZE
To: Petr Tesařík, Christoph Hellwig
Cc: Michael Kelley, Will Deacon, linux-kernel@vger.kernel.org,
 Petr Tesarik, kernel-team@android.com, iommu@lists.linux.dev,
 Marek Szyprowski, Dexuan Cui, Nicolin Chen
References: <20240228133930.15400-1-will@kernel.org>
 <20240228133930.15400-7-will@kernel.org>
 <20240229133346.GA7177@lst.de>
 <20240229154756.GA10137@lst.de>
 <20240301163927.18358ee2@meshulam.tesarici.cz>
 <20240301180853.5ac20b27@meshulam.tesarici.cz>
In-Reply-To: <20240301180853.5ac20b27@meshulam.tesarici.cz>

On 2024-03-01 5:08 pm, Petr Tesařík wrote:
> On Fri, 1 Mar 2024 16:39:27 +0100
> Petr Tesařík wrote:
>
>> On Thu, 29 Feb 2024 16:47:56 +0100
>> Christoph Hellwig wrote:
>>
>>> On Thu, Feb 29, 2024 at 03:44:11PM +0000, Michael Kelley wrote:
>>>> Any thoughts on how that historical behavior should apply if
>>>> the DMA min_align_mask is non-zero, or the alloc_align_mask
>>>> parameter to swiotlb_tbl_map_single() is non-zero? As currently
>>>> used, alloc_align_mask is page aligned if the IOMMU granule is
>>>> >= PAGE_SIZE. But a non-zero min_align_mask could mandate
>>>> returning a buffer that is not page aligned. Perhaps do the
>>>> historical behavior only if alloc_align_mask and min_align_mask
>>>> are both zero?
>>>
>>> I think the driver setting min_align_mask is a clear indicator
>>> that the driver requested a specific alignment and the defaults
>>> don't apply. For swiotlb_tbl_map_single as used by dma-iommu
>>> I'd have to take a closer look at how it is used.
>>
>> I'm not sure it helps in this discussion, but let me dive into a bit
>> of ancient history to understand how we ended up here.
>>
>> IIRC this behaviour was originally motivated by limitations of PC AT
>> hardware. The Intel 8237 is a 16-bit DMA controller. To make it
>> somewhat usable with addresses up to 16MB (yeah, the infamous DMA
>> zone), IBM added a page register, but it was on a separate chip and
>> it did not increment when the 8237 address rolled over back to zero.
>> Effectively, the page register selected a 64K-aligned window of 64K
>> bytes. Consequently, DMA buffers could not cross a 64K physical
>> boundary.
>>
>> Thanks to how the buddy allocator works, the 64K-boundary constraint
>> was satisfied by the allocation size, and drivers took advantage of
>> it when allocating device buffers. IMO software bounce buffers simply
>> followed the same logic that worked for buffers allocated by the
>> buddy allocator.
>>
>> OTOH min_align_mask was motivated by NVMe, which prescribes the value
>> of a certain number of low bits in the DMA address (for simplicity
>> assumed to be identical with the same bits in the physical address).
>>
>> The only pre-existing user of alloc_align_mask is x86 IOMMU code, and
>> IIUC it is used to guarantee that unaligned transactions do not share
>> the IOMMU granule with another device. This whole thing is weird,
>> because swiotlb_tbl_map_single() is called like this:
>>
>>	aligned_size = iova_align(iovad, size);
>>	phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
>>				      iova_mask(iovad), dir, attrs);
>>
>> Here:
>>
>> * alloc_size = iova_align(iovad, size)
>> * alloc_align_mask = iova_mask(iovad)
>>
>> Now, iova_align() rounds up its argument to a multiple of the iova
>> granule, and iova_mask() is simply "granule - 1". This works, because
>> the granule size must be a power of 2, and I assume it must also be
>> >= PAGE_SIZE.
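>>
>> For concreteness, here is that call with numbers plugged in (a sketch
>> only; the 4K granule and the request size are hypothetical values):
>>
>>	/* iovad->granule == 0x1000, so iova_mask(iovad) == 0xfff */
>>	size = 0x1400;
>>	aligned_size = iova_align(iovad, size);	/* rounds up to 0x2000 */
>>
>> i.e. the bounce buffer is requested as two whole granules with a
>> granule-sized alignment mask, and the granule-boundary alignment is
>> already implied by the rounded-up size.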
>>
>> In that case, the alloc_align_mask argument is not even needed if you
>> adjust the code to match the documentation---the resulting buffer
>> will be aligned to a granule boundary by virtue of having a size that
>> is a multiple of the granule size.
>>
>> To sum it up:
>>
>> 1. min_align_mask is by far the most important constraint. Devices
>>    will simply stop working if it is not met.
>> 2. Alignment to the smallest PAGE_SIZE order which is greater than
>>    or equal to the requested size has been documented, and some
>>    drivers may rely on it.
>> 3. alloc_align_mask is a misguided fix for a bug in the above.
>>
>> Correct me if anything of the above is wrong.
>
> I thought about it some more, and I believe I know what should happen
> if the first two constraints appear to be mutually exclusive.
>
> First, the alignment based on size does not guarantee that the
> resulting physical address is aligned. In fact, the lowest
> IO_TLB_SHIFT bits must always be identical to those of the original
> buffer address.
>
> Let's take an example request like this:
>
>	TLB_SIZE       = 0x00000800
>	min_align_mask = 0x0000ffff
>	orig_addr      = 0x....1234
>	alloc_size     = 0x00002800
>
> The minimum alignment mask requires keeping the 1234 at the end. The
> allocation size requires a buffer that is aligned to 16K (the
> smallest power-of-two allocation covering 0x2800 is 0x4000). Of
> course, there is no 16K-aligned slot with slot_address & 0x7ff ==
> 0x200, but if IO_TLB_SHIFT were 14, it would be slot_address & 0x3fff
> == 0 (the low IO_TLB_SHIFT bits are masked off). Since the SWIOTLB
> API does not guarantee any specific value of IO_TLB_SHIFT, callers
> cannot rely on it. That means 0x1234 is a perfectly valid bounce
> buffer address for this example.
>
> The caller may rightfully expect that the 16K granule containing the
> bounce buffer is not shared with any other user. For the above case I
> suggest increasing the allocation size to 0x4000 already in
> swiotlb_tbl_map_single() and treating 0x1234 as the offset from the
> slot address.

That doesn't make sense - a caller asks to map some range of kernel
addresses and they get back a corresponding range of DMA addresses;
they cannot make any reasonable assumptions about DMA addresses
*outside* that range. In the example above, the caller has explicitly
chosen not to map the range xxx0000-xxx1234; if they expect the device
to actually access bytes in the DMA range yyy0000-yyy1234, then they
should have mapped the whole range starting from xxx0000, and it is
their own error.

SWIOTLB does not and cannot provide any memory protection itself, so
there is no functional benefit to automatically over-allocating; all
it will do is waste slots. iommu-dma *can* provide memory protection
between individual mappings via additional layers that SWIOTLB doesn't
know about, so in that case it's iommu-dma's responsibility to
explicitly manage whatever over-allocation is necessary at the SWIOTLB
level to match the IOMMU level.

Thanks,
Robin.