Subject: Re: [PATCH RFC v2 00/18] Add VFIO mediated device support and DEV-MSI support for the idxd driver
From: Jason Wang
To: "Tian, Kevin", Jason Gunthorpe, Alex Williamson
Cc: "Jiang, Dave", "vkoul@kernel.org", "Dey, Megha", "maz@kernel.org", "bhelgaas@google.com", "rafael@kernel.org", "gregkh@linuxfoundation.org", "tglx@linutronix.de", "hpa@zytor.com", "Pan, Jacob jun", "Raj, Ashok", "Liu, Yi L", "Lu, Baolu", "Kumar, Sanjay K", "Luck, Tony", "Lin, Jing", "Williams, Dan J", "kwankhede@nvidia.com", "eric.auger@redhat.com", "parav@mellanox.com", "Hansen, Dave", "netanelg@mellanox.com", "shahafs@mellanox.com", "yan.y.zhao@linux.intel.com", "pbonzini@redhat.com", "Ortiz, Samuel", "Hossain, Mona", "dmaengine@vger.kernel.org", "linux-kernel@vger.kernel.org", "x86@kernel.org", "linux-pci@vger.kernel.org", "kvm@vger.kernel.org"
Date: Thu, 13 Aug 2020 14:01:58 +0800
References: <159534667974.28840.2045034360240786644.stgit@djiang5-desk3.ch.intel.com> <20200721164527.GD2021248@mellanox.com> <20200724001930.GS2021248@mellanox.com> <20200805192258.5ee7a05b@x1.home> <20200807121955.GS16789@nvidia.com>
On 2020/8/13 1:26 PM, Tian, Kevin wrote:
>> From: Jason Wang
>> Sent: Thursday, August 13, 2020 12:34 PM
>>
>> On 2020/8/12 12:05 PM, Tian, Kevin wrote:
>>>> The problem is that if we tie all controls to the VFIO uAPI, other
>>>> subsystems like vDPA are likely to duplicate them. I wonder if there
>>>> is a way to decouple vSVA from the VFIO uAPI?
>>> vSVA is a per-device (either pdev or mdev) feature, thus it naturally
>>> should be managed by its device driver (VFIO or vDPA). From this angle
>>> some duplication is inevitable, given VFIO and vDPA are orthogonal
>>> passthrough frameworks. Within the kernel the majority of vSVA
>>> handling is done by the IOMMU and IOASID modules, thus most logic is
>>> shared.
>>
>> So why not introduce the vSVA uAPI at the IOMMU or IOASID layer?
> One may ask a similar question why IOMMU doesn't expose map/unmap
> as uAPI...

I think this is probably a good idea as well. If anything is missing
from the infrastructure, we can invent it. Besides vhost-vDPA, there
are other subsystems that relay their uAPI to the IOMMU API.
Duplicated uAPIs are usually a hint of duplicated code. Simple
map/unmap would be easy, but a vSVA uAPI is much more complicated.

>
>>
>>>>> If a userspace DMA interface can be easily
>>>>> adapted to be a passthrough one, it might be the choice.
>>>> It's not that easy even for VFIO, which required a lot of new uAPIs
>>>> and infrastructures (e.g. mdev) to be invented.
>>>>
>>>>> But for idxd,
>>>>> we see mdev as a much better fit here, given the big difference
>>>>> between what userspace DMA requires and what a guest driver
>>>>> requires in this hw.
>>>> A weak point of mdev is that it can't serve kernel subsystems other
>>>> than VFIO. In this case, you need some other infrastructure (like
>>>> [1]) to do this.
>>> mdev is not exclusive of kernel usages. It's perfectly fine for a
>>> driver to reserve some work queues for host usage, while wrapping
>>> others into mdevs.
>>
>> I meant you may want the slices to be independent devices from the
>> kernel point of view:
>>
>> E.g. for ethernet devices, you may want 10K mdevs to be passed to
>> guests.
>>
>> Similarly, you may want 10K net devices which are connected to the
>> kernel networking subsystem.
>>
>> In this case it's not simply a matter of reserving queues; you need
>> some other type of device abstraction. There could be some kind of
>> duplication between this and mdev.
>>
> yes, some abstraction is required, but isn't that what the driver
> should care about instead of the mdev framework itself?

With mdev you present a "PCI" device to the guest, but what kind of
device does the slice present to the kernel? If it's still PCI, there's
duplication with mdev; if it's something new, maybe we can switch to
that API.

> If the driver reports
> the same set of resources to both mdev and networking, it needs to
> make sure that when a resource is claimed via one interface it is
> marked in-use in the other. e.g. each mdev type includes an
> available_instances attribute. The driver could report 10K available
> instances initially and then update it to 5K when another 5K are
> used for net devices later.

Right, but this probably means you need another management layer under
mdev.
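To make that concrete, here is a minimal sketch of what such accounting
could look like in a parent driver, modeled loosely on
samples/vfio-mdev/mtty.c with the current (~v5.8) mdev type-attribute
API. "struct my_parent", its avail_queues counter, and the "wq" type
name are all hypothetical; the point is that available_instances is
just a sysfs show function, so splitting queues between mdev and e.g. a
netdev means the parent driver has to do that bookkeeping itself:

#include <linux/device.h>
#include <linux/mdev.h>
#include <linux/sysfs.h>

struct my_parent {
	atomic_t avail_queues;	/* queues not yet claimed by mdev or netdev */
};

static ssize_t available_instances_show(struct kobject *kobj,
					struct device *dev, char *buf)
{
	struct my_parent *p = dev_get_drvdata(dev);

	/*
	 * If 5K of the 10K queues were already handed to the networking
	 * side, only the remaining 5K are reported as mdev instances.
	 */
	return sprintf(buf, "%d\n", atomic_read(&p->avail_queues));
}
static MDEV_TYPE_ATTR_RO(available_instances);

static struct attribute *my_type_attrs[] = {
	&mdev_type_attr_available_instances.attr,
	NULL,
};

static struct attribute_group my_type_group = {
	.name  = "wq",			/* hypothetical mdev type name */
	.attrs = my_type_attrs,
};

Whatever updates avail_queues when net devices are created is exactly
the extra management layer I mean.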
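Btw, coming back to the map/unmap point above: for reference, this is
roughly the existing VFIO type1 uAPI as seen from userspace, i.e. the
part a non-VFIO subsystem would otherwise have to re-invent. A hedged
sketch only; the group number, buffer size, and IOVA are placeholders
and error handling is omitted:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);	/* placeholder group */

	/* Bind the group to the container, then pick the type1 backend. */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* Map 2MB of anonymous memory at IOVA 0 for device DMA. */
	void *buf = mmap(NULL, 2 * 1024 * 1024, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (unsigned long long)buf,
		.iova  = 0,
		.size  = 2 * 1024 * 1024,
	};
	ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

	/* ... device does DMA to IOVA [0, 2MB) ... */

	struct vfio_iommu_type1_dma_unmap unmap = {
		.argsz = sizeof(unmap),
		.iova  = 0,
		.size  = 2 * 1024 * 1024,
	};
	ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap);
	return 0;
}

Even this "easy" part drags the container/group model in with it, which
is part of why a shared layer looks attractive to me.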
>
> Mdev definitely has its usage limitations. Some may be improved in
> the future, some may not. But those are distracting from the
> original purpose of this thread (mdev vs. userspace DMA) and are
> better discussed in other places, e.g. LPC...

Ok.

Thanks

>
> Thanks
> Kevin