Date: Tue, 18 May 2021 12:57:56 -0600
From: Alex Williamson
To: Shenming Lu
Cc: Cornelia Huck, Will Deacon, Robin Murphy, Joerg Roedel,
    Jean-Philippe Brucker, Eric Auger, Kevin Tian, Lu Baolu,
    Christoph Hellwig, Jonathan Cameron, Barry Song
Subject: Re: [RFC PATCH v3 0/8] Add IOPF support for VFIO passthrough
Message-ID: <20210518125756.4c075300.alex.williamson@redhat.com>
In-Reply-To: <20210409034420.1799-1-lushenming@huawei.com>
References: <20210409034420.1799-1-lushenming@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 9 Apr 2021 11:44:12 +0800
Shenming Lu wrote:

> Hi,
>
> Requesting for your comments and suggestions. :-)
>
> The static pinning and mapping problem in VFIO and possible solutions
> have been discussed a lot [1, 2]. One of the solutions is to add I/O
> Page Fault support for VFIO devices.
> Different from those relatively
> complicated software approaches such as presenting a vIOMMU that provides
> the DMA buffer information (might include para-virtualized optimizations),
> IOPF mainly depends on the hardware faulting capability, such as the PCIe
> PRI extension or Arm SMMU stall model. What's more, the IOPF support in
> the IOMMU driver has already been implemented in SVA [3]. So we add IOPF
> support for VFIO passthrough based on the IOPF part of SVA in this series.

The SVA proposals are being reworked to make use of a new IOASID object.
It's not clear to me why this series shouldn't also build on that work,
since it makes a significant expansion of the type1 IOMMU with fault
handlers that would duplicate new work using that new model.

> We have measured its performance with UADK [4] (passthrough an accelerator
> to a VM(1U16G)) on Hisilicon Kunpeng920 board (and compared with host SVA):
>
> Run hisi_sec_test...
>  - with varying sending times and message lengths
>  - with/without IOPF enabled (speed slowdown)
>
> when msg_len = 1MB (and PREMAP_LEN (in Patch 4) = 1):
>            slowdown (num of faults)
>  times    VFIO IOPF       host SVA
>  1        63.4% (518)     82.8% (512)
>  100      22.9% (1058)    47.9% (1024)
>  1000      2.6% (1071)     8.5% (1024)
>
> when msg_len = 10MB (and PREMAP_LEN = 512):
>            slowdown (num of faults)
>  times    VFIO IOPF
>  1        32.6% (13)
>  100       3.5% (26)
>  1000      1.6% (26)

It seems like this is only an example of how you can make a benchmark
show anything you want. The best results would be to pre-map everything,
which is what we have without this series. What is an acceptable overhead
to incur to avoid page pinning? What if userspace had more fine-grained
control over which mappings were available for faulting and which were
statically mapped?

I don't really see what sense the pre-mapping range makes. If I assume
the user is QEMU in a non-vIOMMU configuration, pre-mapping the beginning
of each RAM section doesn't make any logical sense relative to device DMA.

Comments per patch to follow. Thanks,

Alex

> History:
>
> v2 -> v3
>  - Nit fixes.
>  - No reason to disable reporting the unrecoverable faults. (baolu)
>  - Maintain a global IOPF enabled group list.
>  - Split the pre-mapping optimization to be a separate patch.
>  - Add selective faulting support (use vfio_pin_pages to indicate the
>    non-faultable scope and add a new struct vfio_range to record it,
>    untested). (Kevin)
>
> v1 -> v2
>  - Numerous improvements following the suggestions. Thanks a lot to all
>    of you.
>
> Note that PRI is not supported at the moment since there is no hardware.
>
> Links:
> [1] Lesokhin I, et al. Page Fault Support for Network Controllers. In
>     ASPLOS, 2016.
> [2] Tian K, et al. coIOMMU: A Virtual IOMMU with Cooperative DMA Buffer
>     Tracking for Efficient Memory Management in Direct I/O. In USENIX
>     ATC, 2020.
> [3] https://patchwork.kernel.org/project/linux-arm-kernel/cover/20210401154718.307519-1-jean-philippe@linaro.org/
> [4] https://github.com/Linaro/uadk
>
> Thanks,
> Shenming
>
> Shenming Lu (8):
>   iommu: Evolve the device fault reporting framework
>   vfio/type1: Add a page fault handler
>   vfio/type1: Add an MMU notifier to avoid pinning
>   vfio/type1: Pre-map more pages than requested in the IOPF handling
>   vfio/type1: VFIO_IOMMU_ENABLE_IOPF
>   vfio/type1: No need to statically pin and map if IOPF enabled
>   vfio/type1: Add selective DMA faulting support
>   vfio: Add nested IOPF support
>
>  .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c |    3 +-
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |   18 +-
>  drivers/iommu/iommu.c                       |   56 +-
>  drivers/vfio/vfio.c                         |   85 +-
>  drivers/vfio/vfio_iommu_type1.c             | 1000 ++++++++++++++++-
>  include/linux/iommu.h                       |   19 +-
>  include/linux/vfio.h                        |   13 +
>  include/uapi/linux/iommu.h                  |    4 +
>  include/uapi/linux/vfio.h                   |    6 +
>  9 files changed, 1181 insertions(+), 23 deletions(-)
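
For readers who haven't looked at the fault reporting framework this
series builds on, here is a rough sketch (not code from the series) of
what a type1 I/O page fault handler hooked up through
iommu_register_device_fault_handler() could look like.
vfio_iopf_fault_in() and PREMAP_LEN are made-up placeholders standing in
for the series' pin-and-map step and its pre-mapping window; the actual
patches also defer this work out of the fault path rather than handling
it inline as shown here.

    #include <linux/device.h>
    #include <linux/iommu.h>

    static int vfio_iommu_type1_iopf_handler(struct iommu_fault *fault,
                                             void *data)
    {
        struct device *dev = data;      /* cookie passed at registration */
        struct iommu_page_response resp = {
            .argsz   = sizeof(resp),
            .version = IOMMU_PAGE_RESP_VERSION_1,
            .pasid   = fault->prm.pasid,
            .grpid   = fault->prm.grpid,
            .code    = IOMMU_PAGE_RESP_INVALID,
        };
        dma_addr_t iova = fault->prm.addr & PAGE_MASK;
        int ret;

        /* Unrecoverable faults are only reported, never retried. */
        if (fault->type != IOMMU_FAULT_PAGE_REQ)
            return -EOPNOTSUPP;

        /*
         * Placeholder for the real work: look up the vfio_dma covering
         * this iova, pin the backing user page(s) and iommu_map() them
         * with the permissions in fault->prm.perm.  Mapping PREMAP_LEN
         * pages beyond the faulting one amortizes subsequent faults,
         * which is what the pre-mapping patch tunes.
         */
        ret = vfio_iopf_fault_in(dev, iova, fault->prm.perm, PREMAP_LEN);
        if (!ret)
            resp.code = IOMMU_PAGE_RESP_SUCCESS;

        /* Complete the fault group so the device can retry the DMA. */
        if (fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)
            iommu_page_response(dev, &resp);

        return ret;
    }

Registration would happen per device, for example when its group is
attached to an IOPF-enabled container:

    ret = iommu_register_device_fault_handler(dev,
                                               vfio_iommu_type1_iopf_handler,
                                               dev);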