From: Robin Murphy <robin.murphy@arm.com>
To: joro@8bytes.org, will@kernel.org
Cc: iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, suravee.suthikulpanit@amd.com,
	baolu.lu@linux.intel.com, john.garry@huawei.com, dianders@chromium.org,
	rajatja@google.com, chenxiang66@hisilicon.com
Subject: [PATCH v3 14/25] iommu: Indicate queued flushes via gather data
Date: Wed, 4 Aug 2021 18:15:42 +0100
Message-Id: <8f7cd9a8114e3b3f44231849a3b46e7c75cec25e.1628094601.git.robin.murphy@arm.com>
Since iommu_iotlb_gather exists to help drivers optimise flushing for a
given unmap request, it is also the logical place to indicate whether
the unmap is strict or not, and thus help them further optimise for
whether to expect a sync or a flush_all subsequently. As part of that,
it also seems fair to make the flush queue code take responsibility for
enforcing the really subtle ordering requirement it brings, so that we
don't need to worry about forgetting that if new drivers want to add
flush queue support, and can consolidate the existing versions.

While we're adding to the kerneldoc, also fill in some info for
@freelist which was overlooked previously.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---

v3: New

---
 drivers/iommu/dma-iommu.c | 1 +
 drivers/iommu/iova.c      | 7 +++++++
 include/linux/iommu.h     | 8 +++++++-
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index e28396cea6eb..d63b30a7dc82 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -474,6 +474,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
 	iommu_iotlb_gather_init(&iotlb_gather);
+	iotlb_gather.queued = cookie->fq_domain;
 
 	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
 	WARN_ON(unmapped != size);
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index b6cf5f16123b..2ad73fb2e94e 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -637,6 +637,13 @@ void queue_iova(struct iova_domain *iovad,
 	unsigned long flags;
 	unsigned idx;
 
+	/*
+	 * Order against the IOMMU driver's pagetable update from unmapping
+	 * @pte, to guarantee that iova_domain_flush() observes that if called
+	 * from a different CPU before we release the lock below.
+	 */
+	smp_wmb();
+
 	spin_lock_irqsave(&fq->lock, flags);
 
 	/*
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 141779d76035..f7679f6684b1 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -161,16 +161,22 @@ enum iommu_dev_features {
  * @start: IOVA representing the start of the range to be flushed
  * @end: IOVA representing the end of the range to be flushed (inclusive)
  * @pgsize: The interval at which to perform the flush
+ * @freelist: Removed pages to free after sync
+ * @queued: Indicates that the flush will be queued
  *
  * This structure is intended to be updated by multiple calls to the
  * ->unmap() function in struct iommu_ops before eventually being passed
- * into ->iotlb_sync().
+ * into ->iotlb_sync(). Drivers can add pages to @freelist to be freed after
+ * ->iotlb_sync() or ->iotlb_flush_all() have cleared all cached references to
+ * them. @queued is set to indicate when ->iotlb_flush_all() will be called
+ * later instead of ->iotlb_sync(), so drivers may optimise accordingly.
  */
 struct iommu_iotlb_gather {
 	unsigned long		start;
 	unsigned long		end;
 	size_t			pgsize;
 	struct page		*freelist;
+	bool			queued;
 };
 
 /**
-- 
2.25.1
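
P.S. To illustrate the driver-side optimisation this enables, a minimal
sketch of an unmap callback that consults the new flag follows. This is
hypothetical code, not part of the patch: my_iommu_unmap() and
my_pgtable_unmap() are invented names, while iommu_iotlb_gather_add_page()
is the existing helper from <linux/iommu.h>.

	#include <linux/iommu.h>

	/*
	 * Sketch only: when gather->queued is set, ->iotlb_flush_all()
	 * will be invoked later from the flush queue instead of
	 * ->iotlb_sync(), so the driver can skip accumulating per-range
	 * invalidations at unmap time.
	 */
	static size_t my_iommu_unmap(struct iommu_domain *domain,
				     unsigned long iova, size_t size,
				     struct iommu_iotlb_gather *gather)
	{
		/* my_pgtable_unmap() stands in for the real pagetable code */
		size_t unmapped = my_pgtable_unmap(domain, iova, size);

		if (!gather->queued)
			iommu_iotlb_gather_add_page(domain, gather, iova, size);

		return unmapped;
	}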