Date: Tue, 11 Aug 2020 11:00:36 -0600
From: Alex Williamson
To: "Tian, Kevin", Jason Gunthorpe
Cc: "Jiang, Dave", vkoul@kernel.org, "Dey, Megha", maz@kernel.org,
 bhelgaas@google.com, rafael@kernel.org, gregkh@linuxfoundation.org,
 tglx@linutronix.de, hpa@zytor.com, "Pan, Jacob jun", "Raj, Ashok",
 "Liu, Yi L", "Lu, Baolu", "Kumar, Sanjay K", "Luck, Tony", "Lin, Jing",
 "Williams, Dan J", kwankhede@nvidia.com, eric.auger@redhat.com,
 parav@mellanox.com, "Hansen, Dave", netanelg@mellanox.com,
 shahafs@mellanox.com, yan.y.zhao@linux.intel.com, pbonzini@redhat.com,
 "Ortiz, Samuel", "Hossain, Mona", dmaengine@vger.kernel.org,
 linux-kernel@vger.kernel.org, x86@kernel.org, linux-pci@vger.kernel.org,
 kvm@vger.kernel.org
Subject: Re: [PATCH RFC v2 00/18] Add VFIO mediated device support and DEV-MSI support for the idxd driver
Message-ID: <20200811110036.7d337837@x1.home>
References: <159534667974.28840.2045034360240786644.stgit@djiang5-desk3.ch.intel.com>
 <20200721164527.GD2021248@mellanox.com>
 <20200724001930.GS2021248@mellanox.com>
 <20200805192258.5ee7a05b@x1.home>
 <20200807121955.GS16789@nvidia.com>
Organization: Red Hat

On Mon, 10 Aug 2020 07:32:24 +0000
"Tian, Kevin" wrote:

> > From: Jason Gunthorpe
> > Sent: Friday, August 7, 2020 8:20 PM
> >
> > On Wed, Aug 05, 2020 at 07:22:58PM -0600, Alex Williamson wrote:
> >
> > > If you see this as an abuse of the framework, then let's identify those
> > > specific issues and
> > > come up with a better approach.  As we've discussed
> > > before, things like basic PCI config space emulation are acceptable
> > > overhead and low risk (imo) and some degree of register emulation is
> > > well within the territory of an mdev driver.
> >
> > What troubles me is that idxd already has a direct userspace interface
> > to its HW, and does userspace DMA.  The purpose of this mdev is to
> > provide a second direct userspace interface that is a little different
> > and trivially plugs into the virtualization stack.
>
> No.  Userspace DMA and subdevice passthrough (what mdev provides)
> are two distinct usages IMO (at least in the idxd context), and this might
> be the main divergence between us, so let me put more words here.
> If we could reach consensus on this matter, which direction to go
> would be clearer.
>
> First, a passthrough interface has some unique requirements
> which are not commonly seen in a userspace DMA interface, e.g.:
>
> - Tracking DMA dirty pages for live migration;
> - A set of interfaces for using SVA inside the guest:
>   * PASID allocation/free (on some platforms);
>   * bind/unbind guest mm/page table (nested translation);
>   * invalidate IOMMU cache/IOTLB for guest page table changes;
>   * report page requests from device to guest;
>   * forward page responses from guest to device;
> - Configuring irqbypass for posted interrupts;
> - ...
>
> Second, a passthrough interface requires delegating raw controllability
> of the subdevice to the guest driver, while the same delegation might
> not be required for implementing a userspace DMA interface (especially
> for modern devices which support SVA).
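For concreteness, the guest-SVA bullets in the first list above amount to an operations surface shaped something like the sketch below. Every name here is hypothetical rather than the real VFIO/IOMMU uAPI; it only makes the shape of the requirement concrete:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical op table mirroring the guest-SVA bullets above;
 * none of these names exist in the kernel as written. */
struct guest_sva_ops {
	int (*pasid_alloc)(uint32_t min, uint32_t max, uint32_t *pasid);
	int (*pasid_free)(uint32_t pasid);
	/* bind/unbind the guest page table for nested translation */
	int (*bind_guest_pgtbl)(uint32_t pasid, uint64_t pgd_gpa);
	int (*unbind_guest_pgtbl)(uint32_t pasid);
	/* flush the IOTLB after guest page-table changes */
	int (*invalidate_iotlb)(uint32_t pasid, uint64_t addr, uint64_t size);
	/* page request (device -> guest) and response (guest -> device) */
	int (*report_page_request)(uint32_t pasid, uint64_t addr);
	int (*forward_page_response)(uint32_t pasid, int code);
};

/* Trivial mock so the shape can be exercised without hardware. */
static int mock_pasid_alloc(uint32_t min, uint32_t max, uint32_t *pasid)
{
	(void)max;
	*pasid = min;		/* hand out the lowest allowed PASID */
	return 0;
}
```

A userspace DMA interface, by contrast, needs none of these entry points, which is the asymmetry the list is getting at.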
> For example, idxd allows the following settings per wq (the guest
> driver may configure them in any combination):
>
> - put in dedicated or shared mode;
> - enable/disable SVA;
> - associate a guest-provided PASID to an MSI/IMS entry;
> - set threshold;
> - allow/deny privileged access;
> - allocate/free interrupt handle (enlightened for guest);
> - collect error status;
> - ...
>
> We plan to support idxd userspace DMA with SVA.  The driver just needs
> to prepare a wq with a predefined configuration (e.g. shared, SVA,
> etc.), bind the process mm to the IOMMU (non-nested) and then map
> the portal to userspace.  That userspace can do DMA to the
> associated wq doesn't change the fact that the wq is still *owned*
> and *controlled* by the kernel driver.  As far as passthrough is
> concerned, however, the wq is considered 'owned' by the guest driver,
> so we need an interface which can support low-level *controllability*
> from the guest driver.  It is sort of a mess in the uAPI when mixing
> the two together.
>
> Based on the above two reasons, we see distinct requirements between
> userspace DMA and passthrough interfaces, at least in the idxd context
> (though other devices may show less distinction in between).  Therefore,
> we didn't see the value/necessity of reinventing the wheel that mdev
> already handles well, i.e. evolving a simple application-oriented
> userspace DMA interface into a complex guest-driver-oriented
> passthrough interface.  The complexity of doing so would incur far more
> kernel-side changes than the portion of emulation code that you've been
> concerned about...
>
> > I don't think VFIO should be the only entry point to
> > virtualization.  If we say the universe of devices doing userspace DMA
> > must also implement a VFIO mdev to plug into virtualization then it
> > will be a lot of mdevs.
>
> Certainly VFIO will not be the only entry point.  This has to be a
> case-by-case decision.
> If a userspace DMA interface can be easily
> adapted to be a passthrough one, it might be the choice.  But for idxd,
> we see mdev as a much better fit here, given the big difference between
> what userspace DMA requires and what a guest driver requires in this hw.
>
> > I would prefer to see that the existing userspace interface have the
> > extra needed bits for virtualization (eg by having appropriate
> > internal kernel APIs to make this easy) and all the emulation to build
> > the synthetic PCI device be done in userspace.
>
> In the end what decides the direction is the amount of change that
> we have to put in the kernel, not whether we call it 'emulation'.  For
> idxd, adding the special passthrough requirements (guest SVA, dirty
> tracking, etc.) and raw controllability to the simple userspace DMA
> interface would for sure make the kernel more complex than reusing the
> mdev framework (plus some degree of emulation mockup behind it).  Not
> to mention the merit of uAPI compatibility with mdev...

I agree with a lot of this argument; exposing a device through a
userspace interface and allowing user access to a device through a
userspace interface are different levels of abstraction and control.
In an ideal world, perhaps we could compose one from the other, but I
don't think the existence of one is proof that the other is redundant.
That's not to say that mdev/vfio isn't ripe for abuse in this space,
but I'm afraid the test for that abuse is probably much more subtle.

I'll also remind folks that LPC is coming up in just a couple short
weeks and this might be something we should discuss (virtually)
in-person.  uconf CfPs are currently open.  Thanks,

Alex