Date: Wed, 15 Jan 2020 20:58:27 -0700
From: Alex Williamson
To: Mika Penttilä
Cc: Yan Zhao, "zhenyuw@linux.intel.com", "intel-gvt-dev@lists.freedesktop.org",
    "kvm@vger.kernel.org", "linux-kernel@vger.kernel.org", "pbonzini@redhat.com",
    "kevin.tian@intel.com", "peterx@redhat.com"
Subject: Re: [PATCH v2 1/2] vfio: introduce vfio_dma_rw to read/write a range of IOVAs
Message-ID: <20200115205827.2249201c@x1.home>
In-Reply-To: <7528cfff-2512-538e-4e44-85f0a0b0130a@nextfour.com>
References: <20200115034132.2753-1-yan.y.zhao@intel.com>
 <20200115035303.12362-1-yan.y.zhao@intel.com>
 <20200115130638.6926dd08@w520.home>
 <80cf3888-2e51-3fd7-a064-213e7ded188e@nextfour.com>
 <20200115195959.28f33078@x1.home>
 <7528cfff-2512-538e-4e44-85f0a0b0130a@nextfour.com>
Organization: Red Hat
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 16 Jan 2020 03:15:58 +0000
Mika Penttilä wrote:

> On 16.1.2020 4.59, Alex Williamson wrote:
> > On Thu, 16 Jan 2020 02:30:52 +0000
> > Mika Penttilä wrote:
> >
> >> On 15.1.2020 22.06, Alex Williamson wrote:
> >>> On Tue, 14 Jan 2020 22:53:03 -0500
> >>> Yan Zhao wrote:
> >>>
> >>>> vfio_dma_rw will read/write a range of user space memory pointed to by
> >>>> IOVA into/from a kernel buffer without pinning the user space memory.
> >>>>
> >>>> TODO: mark the IOVAs to user space memory dirty if they are written in
> >>>> vfio_dma_rw().
> >>>>
> >>>> Cc: Kevin Tian
> >>>> Signed-off-by: Yan Zhao
> >>>> ---
> >>>>  drivers/vfio/vfio.c             | 45 +++++++++++++++++++
> >>>>  drivers/vfio/vfio_iommu_type1.c | 76 ++++++++++++++++++++++++++++++++
> >>>>  include/linux/vfio.h            |  5 +++
> >>>>  3 files changed, 126 insertions(+)
> >>>>
> >>>> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> >>>> index c8482624ca34..8bd52bc841cf 100644
> >>>> --- a/drivers/vfio/vfio.c
> >>>> +++ b/drivers/vfio/vfio.c
> >>>> @@ -1961,6 +1961,51 @@ int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn, int npage)
> >>>>  }
> >>>>  EXPORT_SYMBOL(vfio_unpin_pages);
> >>>>
> >>>> +/*
> >>>> + * Read/Write a range of IOVAs pointing to user space memory into/from a kernel
> >>>> + * buffer without pinning the user space memory
> >>>> + * @dev [in]  : device
> >>>> + * @iova [in] : base IOVA of a user space buffer
> >>>> + * @data [in] : pointer to kernel buffer
> >>>> + * @len [in]  : kernel buffer length
> >>>> + * @write     : indicate read or write
> >>>> + * Return error code on failure or 0 on success.
> >>>> + */
> >>>> +int vfio_dma_rw(struct device *dev, dma_addr_t iova, void *data,
> >>>> +		size_t len, bool write)
> >>>> +{
> >>>> +	struct vfio_container *container;
> >>>> +	struct vfio_group *group;
> >>>> +	struct vfio_iommu_driver *driver;
> >>>> +	int ret = 0;
> >>
> >> Do you know the iova given to vfio_dma_rw() is indeed a gpa and not an
> >> iova from an iommu mapping? So aren't you actually assuming all of the
> >> guest is pinned, like from device assignment?
> >>
> >> Or who, and how, is the vfio mapping added before the vfio_dma_rw()?
> >
> > vfio only knows about IOVAs, not GPAs.  It's possible that IOVAs are
> > identity mapped to the GPA space, but a VM with a vIOMMU would quickly
> > break any such assumption.  Pinning is also not required.  This access
> > is via the CPU, not the I/O device, so we don't require the memory to
> > be pinned, and it potentially won't be for a non-IOMMU backed mediated
> > device.  The intention here is that via the mediation of an mdev
> > device, a vendor driver would already know IOVA ranges for the device
> > to access via the guest driver programming of the device.  Thanks,
> >
> > Alex
>
> Thanks Alex... you mean the IOVA is, in the IOMMU case, already an
> IOMMU-translated address to a user space VA in the host?

The user (QEMU in the case of device assignment) performs ioctls to map
user VAs to IOVAs for the device.  With IOMMU backing, the VAs are
pinned to get the HPAs, and the IOVA-to-HPA mappings are programmed into
the IOMMU.  Thus the device accesses the IOVA to get to the HPA, which
is the backing for the VA.  In this case we're simply using the IOVA to
look up the VA and access it with the CPU directly.  The IOMMU isn't
involved, but we're still performing an access as if we were the device
doing a DMA.  Let me know if that doesn't answer your question.
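[Illustrative aside, not part of the original mail: the mapping ioctl Alex
refers to above is VFIO_IOMMU_MAP_DMA on the container fd. A minimal
userspace sketch follows; container/group setup is omitted, and the IOVA,
size, and memory source are made-up example values.]

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/* Map 2 MB of anonymous user memory at an example IOVA so the device
 * (or, with vfio_dma_rw(), the CPU acting on the device's behalf)
 * can reach it. */
static int map_example_region(int container_fd)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.iova  = 0x100000000ULL,	/* example IOVA */
		.size  = 2 * 1024 * 1024,	/* example size */
	};
	void *vaddr = mmap(NULL, map.size, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (vaddr == MAP_FAILED)
		return -1;

	map.vaddr = (uintptr_t)vaddr;

	/* The type1 backend records the IOVA->VA relationship; with an
	 * IOMMU present it also pins the pages and programs the
	 * IOVA->HPA translation. */
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}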
> How does it get hold of that? What piece of mediation is responsible
> for this?

It's device specific.  The mdev vendor driver is mediating a specific
hardware device where user accesses to MMIO on the device configure DMA
targets.  The mediation needs to trap those accesses in order to pin
pages and program the real hardware with real physical addresses (be
they HPAs or host IOVAs, depending on the host IOMMU config) to perform
those DMAs.  For cases where the CPU might choose to perform some sort
of virtual DMA on behalf of the device itself, this interface would be
used.  Thanks,

Alex
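[Illustrative aside, not part of the original mail: a rough sketch of how
an mdev vendor driver might use the proposed vfio_dma_rw() for the
"virtual DMA" case Alex describes. Only the vfio_dma_rw() signature comes
from the patch quoted above; the descriptor layout, the helper name, and
how the driver learned the IOVA are assumptions for illustration.]

#include <linux/types.h>
#include <linux/device.h>
#include <linux/vfio.h>

/* Made-up descriptor format that a guest driver is assumed to have
 * placed in guest memory, with its IOVA written to a mediated MMIO
 * register. */
struct example_desc {
	u64 buf_iova;
	u32 len;
	u32 flags;
};

/* @dev is the mdev's struct device; @desc_iova was captured by the
 * vendor driver while mediating the guest's MMIO write. */
static int example_fetch_desc(struct device *dev, dma_addr_t desc_iova,
			      struct example_desc *desc)
{
	/* write=false: copy from the IOVA range into the kernel buffer,
	 * without pinning the underlying user pages. */
	return vfio_dma_rw(dev, desc_iova, desc, sizeof(*desc), false);
}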