Message-ID: <564EB4F2.9080605@intel.com>
Date: Fri, 20 Nov 2015 13:51:46 +0800
From: Jike Song
To: Alex Williamson
CC: Stefano Stabellini, "Tian, Kevin", xen-devel@lists.xen.org,
    igvt-g@ml01.01.org, intel-gfx@lists.freedesktop.org,
    linux-kernel@vger.kernel.org, "White, Michael L", "Dong, Eddie",
    "Li, Susie", "Cowperthwaite, David J", "Reddy, Raghuveer",
    "Zhu, Libo", "Zhou, Chao", "Wang, Hongbo", "Lv, Zhiyuan",
    qemu-devel, Paolo Bonzini, Gerd Hoffmann
Subject: Re: [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel
References: <53D215D3.50608@intel.com> <547FCAAD.2060406@intel.com>
    <54AF967B.3060503@intel.com> <5527CEC4.9080700@intel.com>
    <559B3E38.1080707@intel.com> <562F4311.9@intel.com>
    <1447870341.4697.92.camel@redhat.com> <564D78D0.80904@intel.com>
    <1447948366.4697.119.camel@redhat.com> <564E8C51.6070706@intel.com>
    <1447993371.4697.257.camel@redhat.com>
In-Reply-To: <1447993371.4697.257.camel@redhat.com>

On 11/20/2015 12:22 PM, Alex Williamson wrote:
> On Fri, 2015-11-20 at 10:58 +0800, Jike Song wrote:
>> On 11/19/2015 11:52 PM, Alex Williamson wrote:
>>> On Thu, 2015-11-19 at 15:32 +0000, Stefano Stabellini wrote:
>>>> On Thu, 19 Nov 2015, Jike Song wrote:
>>>>> Hi Alex, thanks for the discussion.
>>>>>
>>>>> In addition to Kevin's replies, I have a high-level question: can VFIO
>>>>> be used by QEMU for both KVM and Xen?
>>>>
>>>> No. VFIO cannot be used with Xen today. When running on Xen, the IOMMU
>>>> is owned by Xen.
>>>
>>> Right, but in this case we're talking about device MMUs, which are owned
>>> by the device driver which I think is running in dom0, right? This
>>> proposal doesn't require support of the system IOMMU, the dom0 driver
>>> maps IOVA translations just as it would for itself. We're largely
>>> proposing use of the VFIO API to provide a common interface to expose a
>>> PCI(e) device to QEMU, but what happens in the vGPU vendor device and
>>> IOMMU backends is specific to the device and perhaps even specific to
>>> the hypervisor. Thanks,
>>
>> Let me conclude this, and please correct me in case of any misread: the
>> vGPU interface between kernel and QEMU will be through VFIO, with a new
>> VFIO backend (instead of the existing type1), for both KVMGT and XenGT?
>
> My primary concern is KVM and QEMU upstream, the proposal is not
> specifically directed at XenGT, but does not exclude it either. Xen is
> welcome to adopt this proposal as well, it simply defines the channel
> through which vGPUs are exposed to QEMU as the VFIO API. The core VFIO
> code in the Linux kernel is just as available for use in Xen dom0 as it
> is for a KVM host. VFIO in QEMU certainly knows about some
> accelerations for KVM, but these are almost entirely around allowing
> eventfd based interrupts to be injected through KVM, which is something
> I'm sure Xen could provide as well. These accelerations are also not
> required, VFIO based device assignment in QEMU works with or without
> KVM. Likewise, the VFIO kernel interface knows nothing about KVM and
> has no dependencies on it.
>
> There are two components to the VFIO API, one is the type1 compliant
> IOMMU interface, which for this proposal is really doing nothing more
> than tracking the HVA to GPA mappings for the VM. This much seems
> entirely common regardless of the hypervisor. The other part is the
> device interface. The lifecycle of the virtual device seems like it
> would be entirely shared, as does much of the emulation components of
> the device. When we get to pinning pages, providing direct access to
> memory ranges for a VM, and accelerating interrupts, the vGPU drivers
> will likely need some per hypervisor branches, but these are areas where
> that's true no matter what the interface. I'm probably over
> simplifying, but hopefully not too much, correct me if I'm wrong.
>

Thanks for the confirmation. For QEMU/KVM I fully agree with your point;
for XenGT, however, it is a bit more complex: with the Xen hypervisor and
the Dom0 kernel running at different levels, it is not straightforward
for QEMU to do something like mapping a portion of an MMIO BAR via VFIO
in the Dom0 kernel, instead of issuing hypercalls directly. I don't know
whether there is a better way to handle this.

But I do agree that a channel between the kernel and QEMU via VFIO is a
good idea, even though we may have to split KVMGT/XenGT in QEMU a bit.
We are currently working on moving all of the PCI CFG emulation from the
kernel to QEMU; hopefully we can release it by the end of this year and
work with you guys to adjust it to the agreed method.
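For reference, and to make sure I read the "two components" description
correctly, below is roughly the userspace sequence I understand we would
be relying on. This is only a minimal sketch of the existing vfio-pci +
type1 flow, with no error handling; the group number, device address and
mapping values are placeholders, and for a vGPU the device fd would of
course come from the new vendor backend rather than vfio-pci:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
    /* One container per address space, one (placeholder) group bound to it. */
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* "Guest RAM": an HVA range which type1 tracks as GPA 0 for the VM. */
    size_t len = 16UL << 20;
    void *hva = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)hva,    /* host virtual address */
        .iova  = 0,                 /* guest physical address */
        .size  = len,
    };
    ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

    /* Device interface: get a device fd and query one of its BAR regions. */
    int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:00:02.0");
    struct vfio_region_info reg = {
        .argsz = sizeof(reg),
        .index = VFIO_PCI_BAR0_REGION_INDEX,
    };
    ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);
    printf("BAR0: size 0x%llx at offset 0x%llx\n",
           (unsigned long long)reg.size, (unsigned long long)reg.offset);
    return 0;
}

Nothing in that sequence looks KVM specific, which matches your
description; the open question on our side is only whether the MMIO
mapping part can be driven from Dom0 this way instead of through
hypercalls.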
> The benefit of course is that aside from some extensions to the API, the
> QEMU components are already in place and there's a lot more leverage for
> getting both QEMU and libvirt support upstream in being able to support
> multiple vendors, perhaps multiple hypervisors, with the same code.
> Also, I'm not sure how useful it is, but VFIO is a userspace driver
> interface, where here we're predominantly talking about that userspace
> driver being QEMU. It's not limited to that though. A userspace
> compute application could have direct access to a vGPU through this
> model. Thanks,
>
> Alex

--
Thanks,
Jike