Date: Thu, 6 Sep 2018 09:31:33 -0400
From: Jerome Glisse
To: Kenneth Lee
Cc: Alex Williamson, Kenneth Lee, Jonathan Corbet, Herbert Xu,
Miller" , Joerg Roedel , Hao Fang , Zhou Wang , Zaibo Xu , Philippe Ombredanne , Greg Kroah-Hartman , Thomas Gleixner , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, iommu@lists.linux-foundation.org, kvm@vger.kernel.org, linux-accelerators@lists.ozlabs.org, Lu Baolu , Sanjay Kumar , linuxarm@huawei.com Subject: Re: [RFCv2 PATCH 0/7] A General Accelerator Framework, WarpDrive Message-ID: <20180906133133.GA3830@redhat.com> References: <20180903005204.26041-1-nek.in.cn@gmail.com> <20180904150019.GA4024@redhat.com> <20180904101509.62314b67@t450s.home> <20180906094532.GG230707@Turing-Arch-b> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <20180906094532.GG230707@Turing-Arch-b> User-Agent: Mutt/1.10.0 (2018-05-17) X-Scanned-By: MIMEDefang 2.78 on 10.11.54.4 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.5]); Thu, 06 Sep 2018 13:31:41 +0000 (UTC) X-Greylist: inspected by milter-greylist-4.5.16 (mx1.redhat.com [10.11.55.5]); Thu, 06 Sep 2018 13:31:41 +0000 (UTC) for IP:'10.11.54.4' DOMAIN:'int-mx04.intmail.prod.int.rdu2.redhat.com' HELO:'smtp.corp.redhat.com' FROM:'jglisse@redhat.com' RCPT:'' Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Sep 06, 2018 at 05:45:32PM +0800, Kenneth Lee wrote: > On Tue, Sep 04, 2018 at 10:15:09AM -0600, Alex Williamson wrote: > > Date: Tue, 4 Sep 2018 10:15:09 -0600 > > From: Alex Williamson > > To: Jerome Glisse > > CC: Kenneth Lee , Jonathan Corbet , > > Herbert Xu , "David S . Miller" > > , Joerg Roedel , Kenneth Lee > > , Hao Fang , Zhou Wang > > , Zaibo Xu , Philippe > > Ombredanne , Greg Kroah-Hartman > > , Thomas Gleixner , > > linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, > > linux-crypto@vger.kernel.org, iommu@lists.linux-foundation.org, > > kvm@vger.kernel.org, linux-accelerators@lists.ozlabs.org, Lu Baolu > > , Sanjay Kumar , > > linuxarm@huawei.com > > Subject: Re: [RFCv2 PATCH 0/7] A General Accelerator Framework, WarpDrive > > Message-ID: <20180904101509.62314b67@t450s.home> > > > > On Tue, 4 Sep 2018 11:00:19 -0400 > > Jerome Glisse wrote: > > > > > On Mon, Sep 03, 2018 at 08:51:57AM +0800, Kenneth Lee wrote: > > > > From: Kenneth Lee > > > > > > > > WarpDrive is an accelerator framework to expose the hardware capabilities > > > > directly to the user space. It makes use of the exist vfio and vfio-mdev > > > > facilities. So the user application can send request and DMA to the > > > > hardware without interaction with the kernel. This removes the latency > > > > of syscall. > > > > > > > > WarpDrive is the name for the whole framework. The component in kernel > > > > is called SDMDEV, Share Domain Mediated Device. Driver driver exposes its > > > > hardware resource by registering to SDMDEV as a VFIO-Mdev. So the user > > > > library of WarpDrive can access it via VFIO interface. > > > > > > > > The patchset contains document for the detail. Please refer to it for more > > > > information. > > > > > > > > This patchset is intended to be used with Jean Philippe Brucker's SVA > > > > patch [1], which enables not only IO side page fault, but also PASID > > > > support to IOMMU and VFIO. > > > > > > > > With these features, WarpDrive can support non-pinned memory and > > > > multi-process in the same accelerator device. 
> > > > We tested it on our SoC-integrated accelerator (board ID: D06,
> > > > chip ID: HIP08). A reference work tree can be found here: [2].
> > > >
> > > > But it is not mandatory. This patchset is tested on the latest
> > > > mainline kernel without the SVA patches, so it supports only one
> > > > process per accelerator.
> > > >
> > > > We have noticed the IOMMU-aware mdev RFC announced recently [3].
> > > >
> > > > The IOMMU-aware mdev has a similar idea but a different intention
> > > > compared to WarpDrive. It intends to dedicate part of the hardware
> > > > resources to a VM, and the design is supposed to be used with
> > > > Scalable I/O Virtualization. Sdmdev, on the other hand, is intended
> > > > to share the hardware resources among a large number of processes.
> > > > It just requires the hardware to support per-process address
> > > > translation (PCIe PASID or ARM SMMU substream ID).
> > > >
> > > > But we don't see a serious conflict between the two designs. We
> > > > believe they can be unified into one.
> > > >
> > >
> > > So once again I do not understand why you are trying to do things
> > > this way. The kernel already has tons of examples of everything you
> > > want to do without a new framework. Moreover I believe you are
> > > confused by VFIO. To me VFIO is for VMs, not for creating a general
> > > device driver framework.
> >
> > VFIO is a userspace driver framework; the VM use case just happens to
> > be a rather prolific one. VFIO was never intended to be solely a VM
> > device interface and has several other userspace users, notably DPDK
> > and SPDK, an NVMe backend in QEMU, a userspace NVMe driver, a ruby
> > wrapper, and perhaps others that I'm not aware of. Whether vfio is an
> > appropriate interface here might certainly still be a debatable topic,
> > but I would strongly disagree with your last sentence above. Thanks,
> >
> > Alex
> >
>
> Yes, that is also my standpoint here.
>
> > > So here is your use case as I understand it. You have a device
> > > with a limited number of command queues (can be just one), and in
> > > some cases it can support SVA/SVM (when the hardware supports it and
> > > it is not disabled). The final requirement is being able to schedule
> > > commands from userspace without an ioctl. All of this already exists
> > > upstream in a few device drivers.
> > >
> > >
> > > So here is how everybody else is doing it. Please explain why
> > > this does not work.
> > >
> > > 1 Userspace opens the device file. The kernel device driver creates
> > >   a context and associates it with the open. This context can be
> > >   unique to the process and can bind hardware resources (like a
> > >   command queue) to the process.
> > > 2 Userspace binds/acquires a command queue and initializes it with
> > >   an ioctl on the device file. Through that ioctl userspace can be
> > >   informed whether SVA/SVM works for the device. If SVA/SVM works,
> > >   the kernel device driver binds the process to the device as part
> > >   of this ioctl.
> > > 3 If SVA/SVM does not work, userspace does an ioctl to create a dma
> > >   buffer or something that does exactly the same thing.
> > > 4 Userspace mmaps the command queue (mmap of the device file, using
> > >   information gathered at step 2).
> > > 5 Userspace can write commands into the queue it mapped.
> > > 6 When userspace closes the device file, all resources are released
> > >   just like in any existing device driver.
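
To make the above concrete, here is roughly what the userspace side of such
an existing driver looks like. The device node /dev/my_accel0, the ioctl
number and struct acc_bind_queue below are invented purely for illustration;
every real driver defines its own ABI for steps 2 and 3:

/* Illustrative sketch only: the device node, ioctl number and structure
 * layout are hypothetical; every real driver defines its own ABI. */
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

struct acc_bind_queue {                 /* filled in by the bind ioctl (step 2) */
        uint64_t queue_mmap_offset;     /* where to mmap the command queue */
        uint32_t queue_size;
        uint32_t flags;                 /* e.g. a "SVA/SVM available" bit */
};

#define ACC_IOCTL_BIND_QUEUE _IOWR('A', 0x01, struct acc_bind_queue)

int main(void)
{
        struct acc_bind_queue bind = { 0 };
        volatile uint32_t *queue;
        int ret = 1;

        /* step 1: open() creates a per-process context in the driver */
        int fd = open("/dev/my_accel0", O_RDWR);
        if (fd < 0)
                return 1;

        /* step 2: acquire a command queue, learn whether SVA/SVM works */
        if (ioctl(fd, ACC_IOCTL_BIND_QUEUE, &bind))
                goto out;

        /* step 4: map the command queue into userspace */
        queue = mmap(NULL, bind.queue_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, bind.queue_mmap_offset);
        if (queue == MAP_FAILED)
                goto out;

        /* step 5: write commands straight into the mapped queue, no
         * syscall on the fast path */
        queue[0] = 0xdeadbeef;          /* whatever the hardware expects */

        munmap((void *)queue, bind.queue_size);
        ret = 0;
out:
        /* step 6: close() releases the context and every bound resource */
        close(fd);
        return ret;
}

Step 3 (a driver-allocated DMA buffer) only shows up when SVA/SVM is not
available; with SVA/SVM the process address space is bound at step 2 and the
commands can carry plain CPU pointers.
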
> Hi, Jerome,
>
> Just one thing: as I said in the cover letter, dma-buf requires the
> application to use memory created by the driver for DMA. I did try the
> dma-buf way in WarpDrive (refer to [4] in the cover letter); it is a good
> fallback for NOIOMMU mode, or for the case where we cannot solve the
> problem in VFIO.
>
> But in many of my application scenarios, the application already has some
> memory in hand, maybe allocated by a framework or a library. They don't
> get memory from my library; they just pass the pointer in for the data
> operation. They may also have pointers inside the buffer, and those
> pointers may be used by the accelerator. So I need the hardware to fully
> share the address space with the application. That is what dma-buf cannot
> do.

dma-buf can do that ... it is called userptr, you can look at i915 for
instance.

Still this does not answer my question above: why do you need to be in VFIO
to do any of the above? The kernel has tons of examples that do all of the
above and are not in VFIO (including using existing user pointers with a
device).

Cheers,
Jérôme
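
P.S. Since userptr came up: this is roughly what it looks like from
userspace on i915. The structure and ioctl names are quoted from memory of
include/uapi/drm/i915_drm.h, so treat this as a sketch and check the headers
before relying on it:

/* Sketch only: wrap memory the application already owns in a GEM handle.
 * buf must be page aligned and size a multiple of the page size. */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/drm.h>
#include <drm/i915_drm.h>

int wrap_existing_buffer(int drm_fd, void *buf, size_t size)
{
        struct drm_i915_gem_userptr arg;

        memset(&arg, 0, sizeof(arg));
        arg.user_ptr  = (uintptr_t)buf;   /* no copy, no driver allocation */
        arg.user_size = size;
        arg.flags     = 0;

        if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_USERPTR, &arg))
                return -1;

        /* arg.handle is now backed by the application's own pages and can
         * be referenced in GPU commands like any other buffer object. */
        return (int)arg.handle;
}

RDMA memory registration (ibv_reg_mr) is another example of a device
consuming memory the application already owns, outside of VFIO.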