Subject: Re: [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel
From: Alex Williamson
To: Jike Song
Cc: Stefano Stabellini, "Tian, Kevin", "xen-devel@lists.xen.org",
    "igvt-g@ml01.01.org", "intel-gfx@lists.freedesktop.org",
    "linux-kernel@vger.kernel.org", "White, Michael L", "Dong, Eddie",
    "Li, Susie", "Cowperthwaite, David J", "Reddy, Raghuveer",
    "Zhu, Libo", "Zhou, Chao", "Wang, Hongbo", "Lv, Zhiyuan",
    qemu-devel, Paolo Bonzini, Gerd Hoffmann
Date: Thu, 19 Nov 2015 21:22:51 -0700

On Fri, 2015-11-20 at 10:58 +0800, Jike Song wrote:
> On 11/19/2015 11:52 PM, Alex Williamson wrote:
> > On Thu, 2015-11-19 at 15:32 +0000, Stefano Stabellini wrote:
> >> On Thu, 19 Nov 2015, Jike Song wrote:
> >>> Hi Alex, thanks for the discussion.
> >>>
> >>> In addition to Kevin's replies, I have a high-level question: can VFIO
> >>> be used by QEMU for both KVM and Xen?
> >>
> >> No. VFIO cannot be used with Xen today. When running on Xen, the IOMMU
> >> is owned by Xen.
> >
> > Right, but in this case we're talking about device MMUs, which are owned
> > by the device driver which I think is running in dom0, right? This
> > proposal doesn't require support of the system IOMMU, the dom0 driver
> > maps IOVA translations just as it would for itself. We're largely
> > proposing use of the VFIO API to provide a common interface to expose a
> > PCI(e) device to QEMU, but what happens in the vGPU vendor device and
> > IOMMU backends is specific to the device and perhaps even specific to
> > the hypervisor. Thanks,
>
> Let me conclude this, and please correct me in case of any misread: the
> vGPU interface between kernel and QEMU will be through VFIO, with a new
> VFIO backend (instead of the existing type1), for both KVMGT and XenGT?

My primary concern is KVM and QEMU upstream; the proposal is not
specifically directed at XenGT, but it does not exclude it either. Xen is
welcome to adopt this proposal as well: it simply defines the VFIO API as
the channel through which vGPUs are exposed to QEMU. The core VFIO code in
the Linux kernel is just as available for use in Xen dom0 as it is on a KVM
host. VFIO in QEMU certainly knows about some accelerations for KVM, but
these are almost entirely around allowing eventfd-based interrupts to be
injected through KVM, which is something I'm sure Xen could provide as
well. These accelerations are also not required; VFIO-based device
assignment in QEMU works with or without KVM.
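For illustration only, this is roughly what that optional eventfd wiring
looks like from userspace through the existing VFIO_DEVICE_SET_IRQS ioctl.
The device fd is a placeholder, and using the MSI index assumes a PCI-style
vGPU; nothing here is specific to this proposal:

#include <stdlib.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Hand an eventfd to VFIO as the trigger for the device's first MSI
 * vector.  'device_fd' is an already-open VFIO device fd (placeholder). */
static int set_msi_eventfd(int device_fd)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    if (efd < 0)
        return -1;

    size_t argsz = sizeof(struct vfio_irq_set) + sizeof(int);
    struct vfio_irq_set *irq_set = calloc(1, argsz);
    if (!irq_set)
        return -1;

    irq_set->argsz = argsz;
    irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
    irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;   /* assumes a PCI-style vGPU */
    irq_set->start = 0;
    irq_set->count = 1;
    memcpy(irq_set->data, &efd, sizeof(int));

    if (ioctl(device_fd, VFIO_DEVICE_SET_IRQS, irq_set) < 0) {
        free(irq_set);
        return -1;
    }
    free(irq_set);

    /* With KVM, the same eventfd can be attached to an irqfd for direct
     * injection; without KVM, QEMU simply reads the eventfd itself. */
    return efd;
}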
Likewise, the VFIO kernel interface knows nothing about KVM and has no
dependencies on it.

There are two components to the VFIO API. One is the type1-compliant IOMMU
interface, which for this proposal is really doing nothing more than
tracking the HVA-to-GPA mappings for the VM; this much seems entirely
common regardless of the hypervisor. The other part is the device
interface. The lifecycle of the virtual device seems like it would be
entirely shared, as do many of the emulation components of the device
(a bare-bones sketch of this flow from userspace is appended below). When
we get to pinning pages, providing direct access to memory ranges for a VM,
and accelerating interrupts, the vGPU drivers will likely need some
per-hypervisor branches, but those are areas where that's true no matter
what the interface. I'm probably oversimplifying, but hopefully not too
much; correct me if I'm wrong.

The benefit, of course, is that aside from some extensions to the API, the
QEMU components are already in place, and there's a lot more leverage for
getting both QEMU and libvirt support upstream when the same code can
support multiple vendors, and perhaps multiple hypervisors.

Also, I'm not sure how useful it is, but VFIO is a userspace driver
interface, and here we're predominantly talking about that userspace driver
being QEMU. It's not limited to that, though; a userspace compute
application could have direct access to a vGPU through this model.

Thanks,

Alex
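P.S. For anyone unfamiliar with the VFIO userspace flow, the sequence looks
roughly like this: attach an IOMMU group to a type1 container, tell the
kernel how the VM's RAM (HVA) maps to guest-physical addresses, and fetch a
device fd. The group number and device name below are placeholders; how a
vGPU vendor driver would name its virtual devices is not defined by this
proposal:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Minimal sketch: type1 container setup, HVA -> GPA map, device fd. */
static int open_vgpu(void *guest_ram, unsigned long ram_size)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);   /* placeholder group */

    if (container < 0 || group < 0)
        return -1;

    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* The "type1" half of the proposal: this HVA range backs
     * guest-physical address 0 for ram_size bytes. */
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (unsigned long)guest_ram,
        .iova  = 0,
        .size  = ram_size,
    };
    ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

    /* The device half: the same fd could be driven by QEMU or by a
     * userspace compute application. */
    return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:00:02.0");
}

Everything hypervisor- or vendor-specific (page pinning, interrupt
acceleration, register emulation) stays behind those ioctls in the vendor
device and IOMMU backends.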