From: Jacob Pan
To: iommu@lists.linux-foundation.org, LKML, Joerg Roedel, David Woodhouse, Alex Williamson, Jean-Philippe Brucker
Cc: Yi Liu, "Tian, Kevin", Raj Ashok, Christoph Hellwig, Lu Baolu, Jonathan Cameron, Eric Auger, Jacob Pan
Subject: [PATCH v2 1/4] iommu: Introduce cache_invalidate API
Date: Sat, 21 Sep 2019 17:07:47 -0700
Message-Id: <1569110870-12603-2-git-send-email-jacob.jun.pan@linux.intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1569110870-12603-1-git-send-email-jacob.jun.pan@linux.intel.com>
References: <1569110870-12603-1-git-send-email-jacob.jun.pan@linux.intel.com>

From: Yi L Liu

In any virtualization use case, when the first translation stage is
"owned" by the guest OS, the host IOMMU driver has no knowledge of
caching structure updates unless the guest invalidation activities are
trapped by the virtualizer and passed down to the host. Since the
invalidation data can be obtained from user space and will be written
into the physical IOMMU, we must allow security checks at various
layers. Therefore, a generic invalidation data format is proposed here;
model-specific IOMMU drivers need to convert it into their own format.

Signed-off-by: Yi L Liu
Signed-off-by: Jacob Pan
Signed-off-by: Ashok Raj
Signed-off-by: Eric Auger
Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/iommu.c      |  10 +++++
 include/linux/iommu.h      |  14 ++++++
 include/uapi/linux/iommu.h | 110 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 134 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 0c674d80c37f..e27dec2d39b8 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1610,6 +1610,16 @@ int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
 }
 EXPORT_SYMBOL_GPL(iommu_attach_device);
 
+int iommu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
+			   struct iommu_cache_invalidate_info *inv_info)
+{
+	if (unlikely(!domain->ops->cache_invalidate))
+		return -ENODEV;
+
+	return domain->ops->cache_invalidate(domain, dev, inv_info);
+}
+EXPORT_SYMBOL_GPL(iommu_cache_invalidate);
+
 static void __iommu_detach_device(struct iommu_domain *domain,
 				  struct device *dev)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index fdc355ccc570..cf8b504966b0 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -228,6 +228,7 @@ struct iommu_sva_ops {
  * @sva_unbind: Unbind process address space from device
  * @sva_get_pasid: Get PASID associated to a SVA handle
  * @page_response: handle page request response
+ * @cache_invalidate: invalidate translation caches
  * @pgsize_bitmap: bitmap of all possible supported page sizes
  */
 struct iommu_ops {
@@ -291,6 +292,8 @@ struct iommu_ops {
 	int (*page_response)(struct device *dev,
 			     struct iommu_fault_event *evt,
 			     struct iommu_page_response *msg);
+	int (*cache_invalidate)(struct iommu_domain *domain, struct device *dev,
+				struct iommu_cache_invalidate_info *inv_info);
 
 	unsigned long pgsize_bitmap;
 };
@@ -395,6 +398,9 @@ extern int iommu_attach_device(struct iommu_domain *domain,
 			       struct device *dev);
 extern void iommu_detach_device(struct iommu_domain *domain,
 				struct device *dev);
+extern int iommu_cache_invalidate(struct iommu_domain *domain,
+				  struct device *dev,
+				  struct iommu_cache_invalidate_info *inv_info);
 extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
@@ -937,6 +943,14 @@ static inline int iommu_sva_get_pasid(struct iommu_sva *handle)
 	return IOMMU_PASID_INVALID;
 }
 
+static inline int
+iommu_cache_invalidate(struct iommu_domain *domain,
+		       struct device *dev,
+		       struct iommu_cache_invalidate_info *inv_info)
+{
+	return -ENODEV;
+}
+
 #endif /* CONFIG_IOMMU_API */
 
 #ifdef CONFIG_IOMMU_DEBUGFS
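
For illustration only (not part of this patch): a vendor IOMMU driver
would wire up the new callback roughly as sketched below. The
my_iommu_* names are hypothetical and the model-specific flush
operations are left as comments; the sketch only shows the argument
checking and dispatch that the API implies.

static int my_iommu_cache_invalidate(struct iommu_domain *domain,
				     struct device *dev,
				     struct iommu_cache_invalidate_info *inv_info)
{
	/* Reject versions of the uapi structure this driver does not know. */
	if (inv_info->version != IOMMU_CACHE_INVALIDATE_INFO_VERSION_1)
		return -EINVAL;

	switch (inv_info->granularity) {
	case IOMMU_INV_GRANU_DOMAIN:
		/* Model-specific: flush all entries tagged with this domain. */
		return 0;
	case IOMMU_INV_GRANU_PASID:
		/*
		 * Model-specific: flush entries tagged with
		 * inv_info->pasid_info.pasid and/or .archid, depending on
		 * the flags.
		 */
		return 0;
	case IOMMU_INV_GRANU_ADDR:
		/*
		 * Model-specific: flush the range starting at
		 * inv_info->addr_info.addr and spanning
		 * nb_granules * granule_size bytes.
		 */
		return 0;
	default:
		return -EINVAL;
	}
}

static const struct iommu_ops my_iommu_ops = {
	/* ... other callbacks ... */
	.cache_invalidate	= my_iommu_cache_invalidate,
};
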
diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
index fc00c5d4741b..f3e96214df8e 100644
--- a/include/uapi/linux/iommu.h
+++ b/include/uapi/linux/iommu.h
@@ -152,4 +152,114 @@ struct iommu_page_response {
 	__u32	code;
 };
 
+/* defines the granularity of the invalidation */
+enum iommu_inv_granularity {
+	IOMMU_INV_GRANU_DOMAIN,	/* domain-selective invalidation */
+	IOMMU_INV_GRANU_PASID,	/* PASID-selective invalidation */
+	IOMMU_INV_GRANU_ADDR,	/* page-selective invalidation */
+	IOMMU_INV_GRANU_NR,	/* number of invalidation granularities */
+};
+
+/**
+ * struct iommu_inv_addr_info - Address Selective Invalidation Structure
+ *
+ * @flags: indicates the granularity of the address-selective invalidation
+ * - If the PASID bit is set, the @pasid field is populated and the
+ *   invalidation relates to cache entries tagged with this PASID and
+ *   matching the address range.
+ * - If the ARCHID bit is set, @archid is populated and the invalidation
+ *   relates to cache entries tagged with this architecture-specific ID
+ *   and matching the address range.
+ * - Both PASID and ARCHID can be set as they may tag different caches.
+ * - If neither PASID nor ARCHID is set, global addr invalidation applies.
+ * - The LEAF flag indicates whether only the leaf PTE caching needs to be
+ *   invalidated and other paging structure caches can be preserved.
+ * @pasid: process address space ID
+ * @archid: architecture-specific ID
+ * @addr: first stage/level input address
+ * @granule_size: page/block size of the mapping in bytes
+ * @nb_granules: number of contiguous granules to be invalidated
+ */
+struct iommu_inv_addr_info {
+#define IOMMU_INV_ADDR_FLAGS_PASID	(1 << 0)
+#define IOMMU_INV_ADDR_FLAGS_ARCHID	(1 << 1)
+#define IOMMU_INV_ADDR_FLAGS_LEAF	(1 << 2)
+	__u32	flags;
+	__u32	archid;
+	__u64	pasid;
+	__u64	addr;
+	__u64	granule_size;
+	__u64	nb_granules;
+};
+
+/**
+ * struct iommu_inv_pasid_info - PASID Selective Invalidation Structure
+ *
+ * @flags: indicates the granularity of the PASID-selective invalidation
+ * - If the PASID bit is set, the @pasid field is populated and the
+ *   invalidation relates to cache entries tagged with this PASID and
+ *   matching the address range.
+ * - If the ARCHID bit is set, @archid is populated and the invalidation
+ *   relates to cache entries tagged with this architecture-specific ID
+ *   and matching the address range.
+ * - Both PASID and ARCHID can be set as they may tag different caches.
+ * - At least one of PASID or ARCHID must be set.
+ * @pasid: process address space ID
+ * @archid: architecture-specific ID
+ */
+struct iommu_inv_pasid_info {
+#define IOMMU_INV_PASID_FLAGS_PASID	(1 << 0)
+#define IOMMU_INV_PASID_FLAGS_ARCHID	(1 << 1)
+	__u32	flags;
+	__u32	archid;
+	__u64	pasid;
+};
+
+/**
+ * struct iommu_cache_invalidate_info - First level/stage invalidation
+ *     information
+ * @version: API version of this structure
+ * @cache: bitfield that allows to select which caches to invalidate
+ * @granularity: defines the lowest granularity used for the invalidation:
+ *     domain > PASID > addr
+ * @padding: reserved for future use (should be zero)
+ * @pasid_info: invalidation data when @granularity is %IOMMU_INV_GRANU_PASID
+ * @addr_info: invalidation data when @granularity is %IOMMU_INV_GRANU_ADDR
+ *
+ * Not all the combinations of cache/granularity are valid:
+ *
+ * +--------------+---------------+---------------+---------------+
+ * | type /       |   DEV_IOTLB   |     IOTLB     |     PASID     |
+ * | granularity  |               |               |     cache     |
+ * +==============+===============+===============+===============+
+ * | DOMAIN       |      N/A      |       Y       |       Y       |
+ * +--------------+---------------+---------------+---------------+
+ * | PASID        |       Y       |       Y       |       Y       |
+ * +--------------+---------------+---------------+---------------+
+ * | ADDR         |       Y       |       Y       |      N/A      |
+ * +--------------+---------------+---------------+---------------+
+ *
+ * Invalidations by %IOMMU_INV_GRANU_DOMAIN don't take any argument other than
+ * @version and @cache.
+ *
+ * If multiple cache types are invalidated simultaneously, they all
+ * must support the used granularity.
+ */
+struct iommu_cache_invalidate_info {
+#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
+	__u32	version;
+/* IOMMU paging structure cache */
+#define IOMMU_CACHE_INV_TYPE_IOTLB	(1 << 0) /* IOMMU IOTLB */
+#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB	(1 << 1) /* Device IOTLB */
+#define IOMMU_CACHE_INV_TYPE_PASID	(1 << 2) /* PASID cache */
+#define IOMMU_CACHE_INV_TYPE_NR		(3)
+	__u8	cache;
+	__u8	granularity;
+	__u8	padding[2];
+	union {
+		struct iommu_inv_pasid_info pasid_info;
+		struct iommu_inv_addr_info addr_info;
+	};
+};
+
 #endif /* _UAPI_IOMMU_H */
-- 
2.7.4
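
As a usage illustration (again not part of the patch), a host-side
consumer, e.g. a VFIO-like mediator that traps a guest IOTLB flush,
could fill the generic structure and call the new core API roughly as
below. The function name flush_guest_iotlb and the guest_pasid /
guest_iova parameters are hypothetical stand-ins for values recovered
from the trapped guest request.

/* Hypothetical helper relaying a trapped guest invalidation. */
static int flush_guest_iotlb(struct iommu_domain *domain, struct device *dev,
			     u64 guest_pasid, u64 guest_iova)
{
	struct iommu_cache_invalidate_info inv_info = {
		.version	= IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
		.cache		= IOMMU_CACHE_INV_TYPE_IOTLB,
		.granularity	= IOMMU_INV_GRANU_ADDR,
		.addr_info	= {
			.flags		= IOMMU_INV_ADDR_FLAGS_PASID,
			.pasid		= guest_pasid,
			.addr		= guest_iova,	/* assumed granule aligned */
			.granule_size	= 4096,
			.nb_granules	= 1,
		},
	};

	/* The core checks for and forwards to domain->ops->cache_invalidate(). */
	return iommu_cache_invalidate(domain, dev, &inv_info);
}

A domain-selective flush would instead set .granularity to
IOMMU_INV_GRANU_DOMAIN and, per the table above, carry no pasid_info or
addr_info.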