Message-ID: <9597a3ea-b71f-46bd-bc72-1d19e81dcfb5@arm.com>
Date: Fri, 8 Mar 2024 17:17:51 +0000
Subject: Re: [PATCH v6 6/6] swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE
To: Petr Tesařík
Cc: Will Deacon, linux-kernel@vger.kernel.org, kernel-team@android.com,
 iommu@lists.linux.dev, Christoph Hellwig, Marek Szyprowski, Petr Tesarik,
 Dexuan Cui, Nicolin Chen,
 Michael Kelley
References: <20240308152829.25754-1-will@kernel.org>
 <20240308152829.25754-7-will@kernel.org>
 <5c7c7407-5356-4e12-a648-ae695fe0d1cb@arm.com>
 <20240308173816.5351ea58@meshulam.tesarici.cz>
From: Robin Murphy
In-Reply-To: <20240308173816.5351ea58@meshulam.tesarici.cz>

On 2024-03-08 4:38 pm, Petr Tesařík wrote:
> On Fri, 8 Mar 2024 16:08:01 +0000
> Robin Murphy wrote:
>
>> On 2024-03-08 3:28 pm, Will Deacon wrote:
>>> For swiotlb allocations >= PAGE_SIZE, the slab search historically
>>> adjusted the stride to avoid checking unaligned slots. This had the
>>> side-effect of aligning large mapping requests to PAGE_SIZE, but that
>>> was broken by 0eee5ae10256 ("swiotlb: fix slot alignment checks").
>>>
>>> Since this alignment could be relied upon by drivers, reinstate
>>> PAGE_SIZE alignment for swiotlb mappings >= PAGE_SIZE.
>>
>> This seems clear enough to keep me happy now, thanks! And apologies that
>> I managed to confuse even myself in the previous thread...
>>
>> Reviewed-by: Robin Murphy
>
> I thought we agreed that this stricter alignment is unnecessary:
>
> https://lore.kernel.org/linux-iommu/20240305140833.GC3659@lst.de/

No, that was about dma_alloc_coherent() again (and TBH I'm not sure we
should actually relax it anyway, since there definitely are callers who
rely on size-alignment beyond PAGE_SIZE, however they're typically going
to be using the common implementations which end up in alloc_pages() or
CMA and so do offer that, rather than the oddball ones which don't -
e.g. we're never going to be allocating SMMUv3 Stream Tables out of some
restricted pool via the emergency swiotlb_alloc() path). If anywhere,
the place to argue that point would be patch #3 (which as mentioned I'd
managed to forget about before...)

This one's just about preserving a SWIOTLB-specific behaviour which has
the practical effect of making SWIOTLB a bit less visible to dma_map_*()
callers. The impact of keeping this is fairly low, so it seems preferable
to the risk of facing issues 2 or 3 years down the line when someone
finally upgrades their distro and their data gets eaten because it turns
out some obscure driver should really have been updated to use
min_align_mask.

Thanks,
Robin.

> But if everybody else wants to have it...
>
> Petr T
>
>>> Reported-by: Michael Kelley
>>> Signed-off-by: Will Deacon
>>> ---
>>>   kernel/dma/swiotlb.c | 18 +++++++++++-------
>>>   1 file changed, 11 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>>> index c381a7ed718f..c5851034523f 100644
>>> --- a/kernel/dma/swiotlb.c
>>> +++ b/kernel/dma/swiotlb.c
>>> @@ -992,6 +992,17 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>>>   	BUG_ON(!nslots);
>>>   	BUG_ON(area_index >= pool->nareas);
>>>
>>> +	/*
>>> +	 * Historically, swiotlb allocations >= PAGE_SIZE were guaranteed to be
>>> +	 * page-aligned in the absence of any other alignment requirements.
>>> +	 * 'alloc_align_mask' was later introduced to specify the alignment
>>> +	 * explicitly, however this is passed as zero for streaming mappings
>>> +	 * and so we preserve the old behaviour there in case any drivers are
>>> +	 * relying on it.
>>> +	 */
>>> +	if (!alloc_align_mask && !iotlb_align_mask && alloc_size >= PAGE_SIZE)
>>> +		alloc_align_mask = PAGE_SIZE - 1;
>>> +
>>>   	/*
>>>   	 * Ensure that the allocation is at least slot-aligned and update
>>>   	 * 'iotlb_align_mask' to ignore bits that will be preserved when
>>> @@ -1006,13 +1017,6 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>>>   	 */
>>>   	stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));
>>>
>>> -	/*
>>> -	 * For allocations of PAGE_SIZE or larger only look for page aligned
>>> -	 * allocations.
>>> -	 */
>>> -	if (alloc_size >= PAGE_SIZE)
>>> -		stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
>>> -
>>>   	spin_lock_irqsave(&area->lock, flags);
>>>   	if (unlikely(nslots > pool->area_nslabs - area->used))
>>>   		goto not_found;
>>
>
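
FWIW, a quick standalone sketch of the arithmetic (userspace, not kernel
code; it assumes get_max_slots() boils down to "(mask >> IO_TLB_SHIFT) + 1"
and 4KB pages): with the new branch, a large streaming mapping ends up with
alloc_align_mask = 0xfff, which works out to the same 2-slot stride that the
deleted lower bound used to enforce:

#include <stdio.h>

#define IO_TLB_SHIFT	11			/* 2KB swiotlb slots */
#define PAGE_SHIFT	12			/* assume 4KB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

int main(void)
{
	unsigned long alloc_align_mask = PAGE_SIZE - 1;	/* new behaviour */
	/* what get_max_slots() works out to, as I read it */
	unsigned long stride = (alloc_align_mask >> IO_TLB_SHIFT) + 1;
	/* the minimum stride the removed code used to enforce */
	unsigned long old_stride = PAGE_SHIFT - IO_TLB_SHIFT + 1;

	printf("new stride = %lu slots, old stride = %lu slots\n",
	       stride, old_stride);		/* both print 2 */
	return 0;
}

So, as I understand it, the search still only considers every other 2KB
slot for these mappings, i.e. page-aligned candidates, same as before.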
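
And for completeness, the min_align_mask route for one of those obscure
drivers would look something like the below (hypothetical "foo" driver,
untested sketch; it's essentially what the NVMe driver already does with
NVME_CTRL_PAGE_SIZE - 1):

#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/platform_device.h>

/*
 * Hypothetical driver whose device needs DMA buffers to keep their
 * offset within a 4KB page even when they get bounced through swiotlb.
 */
static int foo_probe(struct platform_device *pdev)
{
	int ret;

	/*
	 * Declare the requirement explicitly rather than relying on the
	 * implicit "mappings >= PAGE_SIZE come back page-aligned"
	 * behaviour: swiotlb will then pick bounce slots which preserve
	 * the original buffer's offset within a 4KB granule.
	 */
	ret = dma_set_min_align_mask(&pdev->dev, PAGE_SIZE - 1);
	if (ret)
		return ret;

	/* ...the rest of probe as usual... */
	return 0;
}

static struct platform_driver foo_driver = {
	.probe	= foo_probe,
	.driver	= {
		.name = "foo",
	},
};
module_platform_driver(foo_driver);
MODULE_LICENSE("GPL");

Note the semantics aren't quite identical: min_align_mask preserves the
buffer's offset within the granule rather than forcing the slot itself to
a page boundary, but that's generally what hardware with this kind of
requirement actually needs.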