From: "Liu, Yi L" <yi.l.liu@intel.com>
To: alex.williamson@redhat.com, eric.auger@redhat.com
Cc: kevin.tian@intel.com, jacob.jun.pan@linux.intel.com, joro@8bytes.org,
    ashok.raj@intel.com, yi.l.liu@intel.com, jun.j.tian@intel.com,
    yi.y.sun@intel.com, jean-philippe@linaro.org, peterx@redhat.com,
    iommu@lists.linux-foundation.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, hao.wu@intel.com
Subject: [PATCH v1 0/8] vfio: expose virtual Shared Virtual Addressing to VMs
Date: Sun, 22 Mar 2020 05:31:57 -0700
Message-Id: <1584880325-10561-1-git-send-email-yi.l.liu@intel.com>

From: Liu Yi L <yi.l.liu@intel.com>

Shared Virtual Addressing (SVA), a.k.a. Shared Virtual Memory (SVM) on
Intel platforms, allows address space sharing between device DMA and
applications. SVA can reduce programming complexity and enhance
security.

This VFIO series is intended to expose SVA usage to VMs, i.e. sharing a
guest application's address space with passthrough devices; this is
called vSVA in this series. The whole vSVA enabling requires
QEMU/VFIO/IOMMU changes; the IOMMU and QEMU changes are in separate
series (listed under "Related series" below).

The high-level architecture for SVA virtualization is shown below. The
key design of vSVA support is to utilize the dual-stage IOMMU
translation (also known as IOMMU nested translation) capability of the
host IOMMU.

     .-------------.  .---------------------------.
     |   vIOMMU    |  | Guest process CR3, FL only|
     |             |  '---------------------------'
     .----------------/
     | PASID Entry |--- PASID cache flush -
     '-------------'                       |
     |             |                       V
     |             |                  CR3 in GPA
     '-------------'
 Guest
 ------| Shadow |--------------------------|--------
       v        v                          v
 Host
     .-------------.  .----------------------.
     |   pIOMMU    |  | Bind FL for GVA-GPA  |
     |             |  '----------------------'
     .----------------/  |
     | PASID Entry |     V (Nested xlate)
     '----------------\.------------------------------.
     |             |   |SL for GPA-HPA, default domain|
     |             |   '------------------------------'
     '-------------'

Where:
 - FL = First level/stage one page tables
 - SL = Second level/stage two page tables
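To make the nesting concrete, below is a toy model of the dual-stage
walk. This is purely illustrative C, not code from this series; both
walker functions are hypothetical stand-ins for real multi-level
page-table walks.

#include <stdio.h>

typedef unsigned long long addr_t;

/* Hypothetical stand-ins for real multi-level page-table walks. */
static addr_t fl_walk(addr_t gva)	/* guest CR3: GVA -> GPA */
{
	return gva;			/* toy identity-mapped guest */
}

static addr_t sl_walk(addr_t gpa)	/* default domain: GPA -> HPA */
{
	return gpa + 0x100000000ULL;	/* toy constant-offset host map */
}

static addr_t nested_xlate(addr_t gva)
{
	/* Hardware walks both stages for DMA tagged with a PASID. */
	return sl_walk(fl_walk(gva));
}

int main(void)
{
	addr_t gva = 0x7f0000001000ULL;

	printf("GVA 0x%llx -> HPA 0x%llx\n", gva, nested_xlate(gva));
	return 0;
}

The point of the model: the guest fully owns the FL tables (it writes
its process CR3 into the PASID entry), while the host keeps the SL
tables private, so the guest can never map beyond its assigned GPA
space.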
There are roughly four parts in this patchset, corresponding to basic
vSVA support for PCI device assignment (an illustrative userspace
sketch of part 1 follows the list):

 1. vfio support for PASID allocation and free for VMs
 2. vfio support for guest page table binding requests from VMs
 3. vfio support for IOMMU cache invalidation from VMs
 4. vfio support for vSVA usage on IOMMU-backed mdevs
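Part 1 is the piece a VMM touches first. Below is a hedged sketch of
how userspace might drive it: the struct layout, flag value, and ioctl
number are hypothetical stand-ins; the authoritative uAPI is what patch
1/8 adds to include/uapi/linux/vfio.h.

#include <sys/ioctl.h>
#include <linux/ioctl.h>

/*
 * Illustrative stand-ins for the uAPI added in patch 1/8 ("vfio: Add
 * VFIO_IOMMU_PASID_REQUEST(alloc/free)").  All names and values here
 * are hypothetical.
 */
struct pasid_request_sketch {
	unsigned int argsz;
	unsigned int flags;	/* request type: alloc vs. free */
	unsigned int min;	/* lowest acceptable PASID value */
	unsigned int max;	/* highest acceptable PASID value */
};
#define PASID_REQ_ALLOC			(1u << 0)	/* hypothetical */
#define VFIO_IOMMU_PASID_REQUEST_SKETCH	_IO(';', 100)	/* hypothetical */

/* Ask the host to allocate one PASID for the guest, within quota. */
static int guest_pasid_alloc(int container_fd)
{
	struct pasid_request_sketch req = {
		.argsz	= sizeof(req),
		.flags	= PASID_REQ_ALLOC,
		.min	= 1,
		.max	= (1u << 20) - 1,	/* 20-bit PASID space */
	};

	/* >= 0: host-allocated PASID; < 0: failure, e.g. over quota. */
	return ioctl(container_fd, VFIO_IOMMU_PASID_REQUEST_SKETCH, &req);
}

A VMM such as QEMU would issue the alloc when the guest vIOMMU driver
requests a PASID, and the matching free on teardown, so the per-VM
quota (patch 2/8) is returned even if the guest crashes.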
The complete vSVA kernel upstream patches are divided into three phases:

 1. Common APIs and PCI device direct assignment
 2. IOMMU-backed Mediated Device assignment
 3. Page Request Services (PRS) support

This patchset aims at phase 1 and phase 2, and is based on Jacob's
series below.

Related series:

[PATCH V10 00/11] Nested Shared Virtual Address (SVA) VT-d support:
https://lkml.org/lkml/2020/3/20/1172

The complete set for the current vSVA work can be found in the branch
below:
https://github.com/luxis1999/linux-vsva.git: vsva-linux-5.6-rc6

The corresponding QEMU patch series is:

[PATCH v1 00/22] intel_iommu: expose Shared Virtual Addressing to VMs

The complete QEMU set can be found in the branch below:
https://github.com/luxis1999/qemu.git: sva_vtd_v10_v1

Regards,
Yi Liu

Changelog:
	- RFC v3 -> Patch v1:
	  a) Addressed comments on the PASID request (alloc/free) path
	  b) Report PASID alloc/free availability to user-space
	  c) Added a vfio_iommu_type1 parameter to support PASID quota
	     tuning
	  d) Adjusted to the latest ioasid code, e.g. removed the code for
	     tracking allocated PASIDs, since the latest ioasid code tracks
	     them itself; VFIO can use ioasid_free_set() to free all PASIDs

	- RFC v2 -> v3:
	  a) Refined the whole patchset to fit the four rough parts in
	     this series
	  b) Added a complete vfio PASID management framework, e.g. PASID
	     alloc, free, reclaim on VM crash/shutdown, and a per-VM PASID
	     quota to prevent PASID abuse
	  c) Added IOMMU uAPI version and page-table format checks to
	     ensure version and hardware compatibility
	  d) Added vSVA vfio support for IOMMU-backed mdevs

	- RFC v1 -> v2:
	  Dropped vfio: VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE

Liu Yi L (8):
  vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)
  vfio/type1: Add vfio_iommu_type1 parameter for quota tuning
  vfio/type1: Report PASID alloc/free support to userspace
  vfio: Check nesting iommu uAPI version
  vfio/type1: Report 1st-level/stage-1 format to userspace
  vfio/type1: Bind guest page tables to host
  vfio/type1: Add VFIO_IOMMU_CACHE_INVALIDATE
  vfio/type1: Add vSVA support for IOMMU-backed mdevs

 drivers/vfio/vfio.c             | 136 +++++++++++++
 drivers/vfio/vfio_iommu_type1.c | 419 ++++++++++++++++++++++++++++++++++++++++
 include/linux/vfio.h            |  21 ++
 include/uapi/linux/vfio.h       | 127 ++++++++++++
 4 files changed, 703 insertions(+)

-- 
2.7.4