From: Liu Yi L
To: alex.williamson@redhat.com, eric.auger@redhat.com, baolu.lu@linux.intel.com, joro@8bytes.org
Cc: kevin.tian@intel.com, jacob.jun.pan@linux.intel.com, ashok.raj@intel.com, yi.l.liu@intel.com, jun.j.tian@intel.com, yi.y.sun@intel.com, jean-philippe@linaro.org, peterx@redhat.com, hao.wu@intel.com,
 iommu@lists.linux-foundation.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 13/14] vfio: Document dual stage control
Date: Wed, 24 Jun 2020 01:55:26 -0700
Message-Id: <1592988927-48009-14-git-send-email-yi.l.liu@intel.com>
In-Reply-To: <1592988927-48009-1-git-send-email-yi.l.liu@intel.com>
References: <1592988927-48009-1-git-send-email-yi.l.liu@intel.com>

From: Eric Auger

The VFIO API was enhanced to support nested stage control: a set of new
ioctls and usage guidelines. Let's document the process to follow to set
up nested mode.

Cc: Kevin Tian
CC: Jacob Pan
Cc: Alex Williamson
Cc: Eric Auger
Cc: Jean-Philippe Brucker
Cc: Joerg Roedel
Cc: Lu Baolu
Signed-off-by: Eric Auger
Signed-off-by: Liu Yi L
---
v2 -> v3:
*) address comments from Stefan Hajnoczi

v1 -> v2:
*) new in v2; compared with Eric's original version, pasid table bind
   and fault reporting are removed as this series doesn't cover them.
   Original version from Eric:
   https://lkml.org/lkml/2020/3/20/700

 Documentation/driver-api/vfio.rst | 67 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index f1a4d3c..639890f 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -239,6 +239,73 @@ group and can access them as follows::
 	/* Gratuitous device reset and go... */
 	ioctl(device, VFIO_DEVICE_RESET);
 
+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. "Stage" corresponds
+to the ARM terminology while "level" corresponds to Intel's VT-d
+terminology. In the following text we use either without distinction.
+
+This is useful when the guest is exposed to a virtual IOMMU and some
+devices are assigned to the guest through VFIO.
+Then the guest OS can use stage 1 (GIOVA -> GPA or GVA -> GPA), while
+the hypervisor uses stage 2 for VM isolation (GPA -> HPA).
+
+Under dual stage translation, the guest gets ownership of the stage 1
+page tables and also owns the stage 1 configuration structures. The
+hypervisor owns the root configuration structure (for security reasons),
+including the stage 2 configuration. This works as long as the
+configuration structures and page table formats are compatible between
+the virtual IOMMU and the physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through:
+
+    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
+
+This forces the hypervisor to use stage 2, leaving stage 1 available for
+guest usage. The guest stage 1 format depends on the IOMMU vendor, as
+does the nesting configuration method. User space should check the format
+and configuration method after setting the nesting type by using:
+
+    ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
+
+Details can be found in Documentation/userspace-api/iommu.rst. For Intel
+VT-d, each stage 1 page table is bound to the host by:
+
+    nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
+    memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
+    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+As mentioned above, the guest OS may use stage 1 for GIOVA -> GPA or
+GVA -> GPA. GVA -> GPA page tables are available when PASID (Process
+Address Space ID) is exposed to the guest, e.g. a guest with
+PASID-capable devices assigned. For such page table binding, the
+bind_data should include PASID info, which is allocated either by the
+guest itself or by the host. This depends on the hardware vendor: e.g.
+Intel VT-d requires PASIDs to be allocated from the host. This
+requirement is defined by the Virtual Command Support in the VT-d 3.0
+spec: guest software running on VT-d should allocate PASIDs from the
+host kernel.
+To allocate a PASID from the host, user space should check the
+IOMMU_NESTING_FEAT_SYSWIDE_PASID bit of the nesting info reported by the
+host kernel. VFIO reports the nesting info via VFIO_IOMMU_GET_INFO. User
+space can then allocate a PASID from the host by:
+
+    req.flags = VFIO_IOMMU_ALLOC_PASID;
+    ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
+
+With the first stage/level page tables bound to the host, the hardware
+can combine the guest stage 1 translation with the hypervisor stage 2
+translation to get the final address.
+
+When the guest invalidates stage 1 related caches, the invalidations
+must be forwarded to the host through:
+
+    nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
+    memcpy(&nesting_op->data, &inv_data, sizeof(inv_data));
+    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+Those invalidations can happen at various granularity levels (page,
+context, etc.).
+
 VFIO User API
 -------------------------------------------------------------------------------

-- 
2.7.4