From: Eric Auger <eric.auger@redhat.com>
To: eric.auger.pro@gmail.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, will@kernel.org,
    joro@8bytes.org, maz@kernel.org, robin.murphy@arm.com
Cc: jean-philippe@linaro.org, zhangfei.gao@linaro.org,
    shameerali.kolothum.thodi@huawei.com, alex.williamson@redhat.com,
    jacob.jun.pan@linux.intel.com, yi.l.liu@intel.com,
    peter.maydell@linaro.org, zhangfei.gao@gmail.com, tn@semihalf.com,
    zhangfei.gao@foxmail.com, bbhushan2@marvell.com
Subject: [PATCH v11 02/13] iommu: Introduce bind/unbind_guest_msi
Date: Tue, 14 Apr 2020 17:05:56 +0200
Message-Id: <20200414150607.28488-3-eric.auger@redhat.com>
In-Reply-To: <20200414150607.28488-1-eric.auger@redhat.com>
References: <20200414150607.28488-1-eric.auger@redhat.com>

On ARM, MSIs are translated by the SMMU. An IOVA is allocated for each
MSI doorbell. If both the host and the guest are exposed with an SMMU,
we end up with two different IOVAs, one allocated by each side.
The guest allocates an IOVA (gIOVA) to map onto the guest MSI doorbell
(gDB). The host allocates another IOVA (hIOVA) to map onto the physical
doorbell (hDB). So we end up with two untied mappings:

         S1             S2
gIOVA -> gDB
                hIOVA -> hDB

Currently the PCI device is programmed by the host with hIOVA as the
MSI doorbell, so this does not work.

This patch introduces an API to pass gIOVA/gDB to the host so that
gIOVA can be reused by the host instead of allocating a new IOVA. The
goal is to create the following nested mapping:

         S1             S2
gIOVA -> gDB         -> hDB

and to program the PCI device with gIOVA as the MSI doorbell.

In case several devices are attached to this nested domain (devices
belonging to the same group), they cannot be isolated on the guest side
either, so they should also end up in the same domain on the guest
side. We enforce that all the devices attached to the host iommu domain
use the same physical doorbell, and similarly that a single virtual
doorbell mapping gets registered (a single virtual doorbell is used on
the guest as well).

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v7 -> v8:
- dummy iommu_unbind_guest_msi turned into a void function

v6 -> v7:
- remove the device handle parameter.
- Add comments saying there can only be a single MSI binding
  registered per iommu_domain

v5 -> v6:
- fix compile issue when IOMMU_API is not set

v3 -> v4:
- add unbind

v2 -> v3:
- add a struct device handle
---
 drivers/iommu/iommu.c | 37 +++++++++++++++++++++++++++++++++++++
 include/linux/iommu.h | 20 ++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b71ad56f8c99..16068bd4d47b 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1756,6 +1756,43 @@ static void __iommu_detach_device(struct iommu_domain *domain,
 	trace_detach_device_from_domain(dev);
 }
 
+/**
+ * iommu_bind_guest_msi - Passes the stage1 GIOVA/GPA mapping of a
+ * virtual doorbell
+ *
+ * @domain: iommu domain the stage 1 mapping will be attached to
+ * @iova: iova allocated by the guest
+ * @gpa: guest physical address of the virtual doorbell
+ * @size: granule size used for the mapping
+ *
+ * The associated IOVA can be reused by the host to create a nested
+ * stage2 binding mapping translating into the physical doorbell used
+ * by the devices attached to the domain.
+ *
+ * All devices within the domain must share the same physical doorbell.
+ * A single MSI GIOVA/GPA mapping can be attached to an iommu_domain.
+ */
+
+int iommu_bind_guest_msi(struct iommu_domain *domain,
+			 dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+	if (unlikely(!domain->ops->bind_guest_msi))
+		return -ENODEV;
+
+	return domain->ops->bind_guest_msi(domain, giova, gpa, size);
+}
+EXPORT_SYMBOL_GPL(iommu_bind_guest_msi);
+
+void iommu_unbind_guest_msi(struct iommu_domain *domain,
+			    dma_addr_t iova)
+{
+	if (unlikely(!domain->ops->unbind_guest_msi))
+		return;
+
+	domain->ops->unbind_guest_msi(domain, iova);
+}
+EXPORT_SYMBOL_GPL(iommu_unbind_guest_msi);
+
 void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
 {
 	struct iommu_group *group;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 3e1057c3585a..31b3c74f5fe2 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -250,6 +250,8 @@ struct iommu_iotlb_gather {
  * @sva_unbind_gpasid: unbind guest pasid and mm
  * @attach_pasid_table: attach a pasid table
  * @detach_pasid_table: detach the pasid table
+ * @bind_guest_msi: provides a stage1 giova/gpa MSI doorbell mapping
+ * @unbind_guest_msi: withdraw a stage1 giova/gpa MSI doorbell mapping
  * @pgsize_bitmap: bitmap of all possible supported page sizes
  * @owner: Driver module providing these ops
  */
@@ -323,6 +325,10 @@ struct iommu_ops {
 
 	int (*sva_unbind_gpasid)(struct device *dev, int pasid);
 
+	int (*bind_guest_msi)(struct iommu_domain *domain,
+			      dma_addr_t giova, phys_addr_t gpa, size_t size);
+	void (*unbind_guest_msi)(struct iommu_domain *domain, dma_addr_t giova);
+
 	unsigned long pgsize_bitmap;
 	struct module *owner;
 };
@@ -454,6 +460,10 @@ extern int iommu_sva_unbind_gpasid(struct iommu_domain *domain,
 extern int iommu_attach_pasid_table(struct iommu_domain *domain,
 				    struct iommu_pasid_table_config *cfg);
 extern void iommu_detach_pasid_table(struct iommu_domain *domain);
+extern int iommu_bind_guest_msi(struct iommu_domain *domain,
+				dma_addr_t giova, phys_addr_t gpa, size_t size);
+extern void iommu_unbind_guest_msi(struct iommu_domain *domain,
+				   dma_addr_t giova);
 extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
@@ -1110,6 +1120,16 @@ static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev)
 {
 	return NULL;
 }
+
+static inline
+int iommu_bind_guest_msi(struct iommu_domain *domain,
+			 dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+	return -ENODEV;
+}
+static inline
+void iommu_unbind_guest_msi(struct iommu_domain *domain, dma_addr_t giova) {}
+
 #endif /* CONFIG_IOMMU_API */
 
 #ifdef CONFIG_IOMMU_DEBUGFS
-- 
2.20.1
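
[Editor's note] For readers unfamiliar with the intended call flow, below is a
minimal, hypothetical sketch of how a caller on the host (for instance the VFIO
layer relaying a userspace request) might use the new API to tie a
guest-provided gIOVA/gDB pair to a device's nested domain. Only
iommu_get_domain_for_dev(), iommu_bind_guest_msi() and iommu_unbind_guest_msi()
come from the kernel and this patch; the example_* wrappers, their argument
names and the error handling are illustrative assumptions, not part of the
series.

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/iommu.h>

/*
 * Illustrative sketch only: bind the guest's stage-1 gIOVA -> gDB MSI
 * doorbell mapping to the nested domain the device is attached to, so the
 * host can build the gIOVA -> gDB -> hDB nested mapping described above.
 */
static int example_bind_guest_doorbell(struct device *dev, dma_addr_t giova,
				       phys_addr_t gdb, size_t granule)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	if (!domain)
		return -EINVAL;

	/* Register the single stage-1 GIOVA/GPA mapping for this domain. */
	return iommu_bind_guest_msi(domain, giova, gdb, granule);
}

/* Illustrative counterpart: withdraw the binding when the guest tears it down. */
static void example_unbind_guest_doorbell(struct device *dev, dma_addr_t giova)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	if (domain)
		iommu_unbind_guest_msi(domain, giova);
}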