From: Yi Liu <yi.l.liu@intel.com>
To: joro@8bytes.org, alex.williamson@redhat.com, jgg@nvidia.com,
	kevin.tian@intel.com, robin.murphy@arm.com
Cc: cohuck@redhat.com, eric.auger@redhat.com, nicolinc@nvidia.com,
	kvm@vger.kernel.org, mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com,
	yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com,
	jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com,
	lulu@redhat.com, suravee.suthikulpanit@amd.com, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	baolu.lu@linux.intel.com
Subject: [PATCH 02/17] iommu: Add nested domain support
Date: Wed, 8 Feb 2023 20:31:38 -0800
Message-Id: <20230209043153.14964-3-yi.l.liu@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230209043153.14964-1-yi.l.liu@intel.com>
References: <20230209043153.14964-1-yi.l.liu@intel.com>

From: Lu Baolu <baolu.lu@linux.intel.com>

Introduce a new domain type for a user space I/O address, which is
nested on top of another user space address represented by an
UNMANAGED domain. The mappings of a nested domain are managed by user
space software, therefore it is unnecessary to have map/unmap
callbacks. But updates of the PTEs in the nested domain page table
must be propagated to the caches on both the IOMMU (IOTLB) and the
devices (DevTLB).

The nested domain is allocated by the domain_alloc_user op and
attached to the device through the existing iommu_attach_device/group()
interfaces.
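[Editor's illustration, not part of the patch: a minimal sketch of how an
in-kernel consumer such as iommufd might drive this alloc-and-attach flow.
The domain_alloc_user op signature is defined in an earlier patch of this
series and is only assumed here; example_alloc_nested() is a made-up
helper.]

#include <linux/iommu.h>
#include <linux/err.h>

/*
 * Sketch only: allocate a user-managed nested domain on top of a
 * stage-2 (UNMANAGED) parent and attach it via the existing API.
 * The domain_alloc_user() signature below is an assumption.
 */
static struct iommu_domain *
example_alloc_nested(struct device *dev, struct iommu_domain *s2_parent,
		     const void *user_data)
{
	const struct iommu_ops *ops = dev_iommu_ops(dev);
	struct iommu_domain *nested;
	int ret;

	if (!ops->domain_alloc_user)
		return ERR_PTR(-EOPNOTSUPP);

	nested = ops->domain_alloc_user(dev, s2_parent, user_data);
	if (IS_ERR(nested))
		return nested;

	/* Nested domains reuse the existing attach interfaces. */
	ret = iommu_attach_device(nested, dev);
	if (ret) {
		iommu_domain_free(nested);
		return ERR_PTR(ret);
	}
	return nested;
}

[In this series it is iommufd, not open-coded callers, that performs the
allocation and attachment; the sketch only illustrates the call flow.]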
A new domain op, named iotlb_sync_user, is added for userspace to flush
the hardware caches for a nested domain through iommufd. No wrapper is
added for it, as it is only supposed to be used by iommufd.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 include/linux/iommu.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 214e3eb9bc86..f6db50f85a20 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -67,6 +67,9 @@ struct iommu_domain_geometry {
 
 #define __IOMMU_DOMAIN_SVA	(1U << 4)  /* Shared process address space */
 
+#define __IOMMU_DOMAIN_NESTED	(1U << 5)  /* User-managed IOVA nested on
+					      a stage-2 translation */
+
 /*
  * This are the possible domain-types
  *
@@ -92,6 +95,7 @@ struct iommu_domain_geometry {
 				 __IOMMU_DOMAIN_DMA_API |	\
 				 __IOMMU_DOMAIN_DMA_FQ)
 #define IOMMU_DOMAIN_SVA	(__IOMMU_DOMAIN_SVA)
+#define IOMMU_DOMAIN_NESTED	(__IOMMU_DOMAIN_NESTED)
 
 struct iommu_domain {
 	unsigned type;
@@ -321,6 +325,7 @@ struct iommu_ops {
  * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
  *              queue
+ * @iotlb_sync_user: Flush hardware TLBs caching user space IO mappings
  * @iova_to_phys: translate iova to physical address
  * @enforce_cache_coherency: Prevent any kind of DMA from bypassing IOMMU_CACHE,
  *                           including no-snoop TLPs on PCIe or other platform
@@ -350,6 +355,8 @@ struct iommu_domain_ops {
 			     size_t size);
 	void (*iotlb_sync)(struct iommu_domain *domain,
 			   struct iommu_iotlb_gather *iotlb_gather);
+	void (*iotlb_sync_user)(struct iommu_domain *domain,
+				void *user_data);
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain,
 				    dma_addr_t iova);
 
-- 
2.34.1
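
[Editor's illustration, not part of the patch: a hypothetical driver-side
use of the new callback. struct example_nested_domain,
example_flush_caches(), and the decoding of user_data are invented for
illustration; only the .iotlb_sync_user member comes from this patch.]

#include <linux/iommu.h>
#include <linux/container_of.h>

/* Hypothetical per-driver representation of a nested (stage-1) domain. */
struct example_nested_domain {
	struct iommu_domain domain;	/* domain.type == IOMMU_DOMAIN_NESTED */
	struct iommu_domain *s2_parent;	/* stage-2 (UNMANAGED) parent */
};

/* Placeholder for the driver's real IOTLB/DevTLB invalidation path. */
static void example_flush_caches(struct example_nested_domain *nested,
				 void *user_data)
{
	/*
	 * A real driver would decode the driver-specific invalidation
	 * request forwarded from userspace via iommufd and flush the
	 * affected IOTLB and device TLB entries here.
	 */
}

static void example_iotlb_sync_user(struct iommu_domain *domain,
				    void *user_data)
{
	struct example_nested_domain *nested =
		container_of(domain, struct example_nested_domain, domain);

	example_flush_caches(nested, user_data);
}

static const struct iommu_domain_ops example_nested_domain_ops = {
	/* No .map/.unmap: the stage-1 page table is owned by userspace. */
	.iotlb_sync_user = example_iotlb_sync_user,
};

[Leaving map/unmap unset mirrors the commit message: userspace writes the
stage-1 page table directly, so the only driver work the kernel needs is
cache invalidation.]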