From: "Tian, Kevin"
To: Kenneth Lee, Jonathan Corbet, Herbert Xu, "David S. Miller",
    Joerg Roedel, Alex Williamson, Kenneth Lee, Hao Fang, Zhou Wang,
    Zaibo Xu, Philippe Ombredanne, Greg Kroah-Hartman, Thomas Gleixner,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-crypto@vger.kernel.org, iommu@lists.linux-foundation.org,
    kvm@vger.kernel.org, linux-accelerators@lists.ozlabs.org,
    Lu Baolu, "Kumar, Sanjay K"
Cc: linuxarm@huawei.com
Subject: RE: [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive
Date: Thu, 2 Aug 2018 02:59:33 +0000
In-Reply-To: <20180801102221.5308-1-nek.in.cn@gmail.com>
References: <20180801102221.5308-1-nek.in.cn@gmail.com>

> From: Kenneth Lee
> Sent: Wednesday, August 1, 2018 6:22 PM
>
> From: Kenneth Lee
>
> WarpDrive is an accelerator framework that exposes hardware capabilities
> directly to user space. It makes use of the existing vfio and vfio-mdev
> facilities, so a user application can send requests and DMA to the
> hardware without interacting with the kernel. This removes the latency
> of syscalls and context switches.
>
> The patchset contains documents with the details; please refer to them
> for more information.
>
> This patchset is intended to be used with Jean-Philippe Brucker's SVA
> patches [1] (which are also at the RFC stage), but that is not mandatory.
> This patchset has been tested on the latest mainline kernel without the
> SVA patches, so it supports only one process per accelerator.

If there is no sharing, then why not just assign the whole parent device
to the process? IMO if SVA usage is the clear goal of your series, it
might be better to make that clear, and then Jean's series becomes a
mandatory dependency...
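As background for that question: such a flow builds on the standard vfio
user-space sequence, roughly as sketched below - attach the group to a
container, get a device fd, mmap the queue/doorbell MMIO once, map a DMA
buffer once, then submit work with plain loads/stores and no syscall on
the fast path. Error handling is omitted, and the group number, mdev
UUID, region index and doorbell layout are placeholders, not values taken
from this series.

/*
 * Minimal sketch of the standard vfio user-space sequence (placeholders
 * throughout, error handling omitted).
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);        /* placeholder group */
    struct vfio_group_status status = { .argsz = sizeof(status) };

    ioctl(group, VFIO_GROUP_GET_STATUS, &status);
    if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1;

    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* For an mdev the device "name" is its UUID; placeholder below. */
    int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD,
                       "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001");

    /* Map the region carrying the queue/doorbell registers. */
    struct vfio_region_info reg = { .argsz = sizeof(reg), .index = 0 };
    ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);
    volatile uint32_t *queue = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
                                    MAP_SHARED, device, reg.offset);

    /* Pin and map one DMA buffer up front (unnecessary once SVA is used). */
    void *buf = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)buf,
        .iova  = 0x10000000,
        .size  = 1 << 20,
    };
    ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

    /* Fast path: write a (hypothetical) doorbell register directly. */
    queue[0] = 0x1;
    return 0;
}

Whether the fd refers to the whole parent device (vfio-pci) or to an mdev
slice changes only the group lookup and the name passed to
VFIO_GROUP_GET_DEVICE_FD.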
Miller" , Joerg Roedel , Alex Williamson , Kenneth Lee , Hao Fang , Zhou Wang , Zaibo Xu , Philippe Ombredanne , "Greg Kroah-Hartman" , Thomas Gleixner , "linux-doc@vger.kernel.org" , "linux-kernel@vger.kernel.org" , "linux-crypto@vger.kernel.org" , "iommu@lists.linux-foundation.org" , "kvm@vger.kernel.org" , "linux-accelerators@lists.ozlabs.org" , Lu Baolu , "Kumar, Sanjay K" CC: "linuxarm@huawei.com" Subject: RE: [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive Thread-Topic: [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive Thread-Index: AQHUKYHSUTvc9I1u4EGviW0yiW9YHKSrv1Ww Date: Thu, 2 Aug 2018 02:59:33 +0000 Message-ID: References: <20180801102221.5308-1-nek.in.cn@gmail.com> In-Reply-To: <20180801102221.5308-1-nek.in.cn@gmail.com> Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-ctpclassification: CTP_NT x-titus-metadata-40: eyJDYXRlZ29yeUxhYmVscyI6IiIsIk1ldGFkYXRhIjp7Im5zIjoiaHR0cDpcL1wvd3d3LnRpdHVzLmNvbVwvbnNcL0ludGVsMyIsImlkIjoiYjZhMmE1MmItZDAyMS00NDRkLTllZTItNDIxOTNkZjRhNzJkIiwicHJvcHMiOlt7Im4iOiJDVFBDbGFzc2lmaWNhdGlvbiIsInZhbHMiOlt7InZhbHVlIjoiQ1RQX05UIn1dfV19LCJTdWJqZWN0TGFiZWxzIjpbXSwiVE1DVmVyc2lvbiI6IjE3LjEwLjE4MDQuNDkiLCJUcnVzdGVkTGFiZWxIYXNoIjoiRmtrRWhDTG9GbVZOclNBbFE4U0NYeEd0aHE0VTQ4ODA4K3BkNnFHSVpueXZibXlxYXc1aVwvQTRQRitxY0JrRmkifQ== dlp-product: dlpe-windows dlp-version: 11.0.400.15 dlp-reaction: no-action x-originating-ip: [10.239.127.40] Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 8BIT MIME-Version: 1.0 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org > From: Kenneth Lee > Sent: Wednesday, August 1, 2018 6:22 PM > > From: Kenneth Lee > > WarpDrive is an accelerator framework to expose the hardware capabilities > directly to the user space. It makes use of the exist vfio and vfio-mdev > facilities. So the user application can send request and DMA to the > hardware without interaction with the kernel. This remove the latency > of syscall and context switch. > > The patchset contains documents for the detail. Please refer to it for more > information. > > This patchset is intended to be used with Jean Philippe Brucker's SVA > patch [1] (Which is also in RFC stage). But it is not mandatory. This > patchset is tested in the latest mainline kernel without the SVA patches. > So it support only one process for each accelerator. If no sharing, then why not just assigning the whole parent device to the process? IMO if SVA usage is the clear goal of your series, it might be made clearly so then Jean's series is mandatory dependency... > > With SVA support, WarpDrive can support multi-process in the same > accelerator device. We tested it in our SoC integrated Accelerator (board > ID: D06, Chip ID: HIP08). A reference work tree can be found here: [2]. > > We have noticed the IOMMU aware mdev RFC announced recently [3]. > > The IOMMU aware mdev has similar idea but different intention comparing > to > WarpDrive. It intends to dedicate part of the hardware resource to a VM. Not just to VM, though I/O Virtualization is in the name. You can assign such mdev to either VMs, containers, or bare metal processes. It's just a fully-isolated device from user space p.o.v. > And the design is supposed to be used with Scalable I/O Virtualization. > While spimdev is intended to share the hardware resource with a big > amount > of processes. 
> But we don't see a serious conflict between the two designs. We believe
> they can be normalized into one.

Yes, there are things which can be shared, e.g. regarding the interface
to the IOMMU. Conceptually, though, I see two different mindsets on
device resource sharing:

WarpDrive aims more to provide a generic framework to enable SVA usages
on various accelerators which lack a well-abstracted user API like
OpenCL. SVA is a hardware capability - this is sort of exposing the
resources composing ONE capability to user space through the mdev
framework. It is not like a VF, which naturally carries most of the
capabilities of the PF.

Intel Scalable I/O Virtualization is a thorough design to partition the
device into minimal sharable copies (queue, queue pair, context), while
each copy carries most PF capabilities (including SVA), similar to a VF.
Also, with IOMMU scalable-mode support, each copy can be independently
assigned to any client (process, container, VM, etc.).

Thanks
Kevin