Date: Thu, 8 Jul 2021 11:29:18 +0200
From: Joerg Roedel
To: David Stevens, Robin Murphy
Cc: Will Deacon, Christoph Hellwig, Sergey Senozhatsky, iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, David Stevens
Subject: Re: [PATCH 0/4] Add dynamic iommu backed bounce buffers
References: <20210707075505.2896824-1-stevensd@google.com>
In-Reply-To: <20210707075505.2896824-1-stevensd@google.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Adding Robin too.

On Wed, Jul 07, 2021 at 04:55:01PM +0900, David Stevens wrote:
> Add support for per-domain dynamic pools of iommu bounce buffers to the
> dma-iommu API.
> This allows iommu mappings to be reused while still
> maintaining strict iommu protection. Allocating buffers dynamically
> instead of using swiotlb carveouts makes per-domain pools more practical
> on systems with large numbers of devices or where devices are unknown.
>
> When enabled, all non-direct streaming mappings below a configurable
> size will go through bounce buffers. Note that this means drivers which
> don't properly use the DMA API (e.g. i915) cannot use an iommu when this
> feature is enabled. However, all drivers which work with swiotlb=force
> should work.
>
> Bounce buffers serve as an optimization in situations where interactions
> with the iommu are very costly. For example, virtio-iommu operations in
> a guest on a linux host require a vmexit, involvement of the VMM, and a
> VFIO syscall. For relatively small DMA operations, memcpy can be
> significantly faster.
>
> As a performance comparison, on a device with an i5-10210U, I ran fio
> with a VFIO passthrough NVMe drive with '--direct=1 --rw=read
> --ioengine=libaio --iodepth=64' and block sizes 4k, 16k, 64k, and
> 128k. Test throughput increased by 2.8x, 4.7x, 3.6x, and 3.6x. Time
> spent in iommu_dma_unmap_(page|sg) per GB processed decreased by 97%,
> 94%, 90%, and 87%. Time spent in iommu_dma_map_(page|sg) decreased
> by >99%, as bounce buffers don't require syncing here in the read case.
> Running with multiple jobs doesn't serve as a useful performance
> comparison because virtio-iommu and vfio_iommu_type1 both have big
> locks that significantly limit multithreaded DMA performance.
>
> This patch set is based on v5.13-rc7 plus the patches at [1].
> 
> David Stevens (4):
>   dma-iommu: add kalloc gfp flag to alloc helper
>   dma-iommu: replace device arguments
>   dma-iommu: expose a few helper functions to module
>   dma-iommu: Add iommu bounce buffers to dma-iommu api
> 
>  drivers/iommu/Kconfig          |  10 +
>  drivers/iommu/Makefile         |   1 +
>  drivers/iommu/dma-iommu.c      | 119 ++++--
>  drivers/iommu/io-buffer-pool.c | 656 +++++++++++++++++++++++++++++++++
>  drivers/iommu/io-buffer-pool.h |  91 +++++
>  include/linux/dma-iommu.h      |  12 +
>  6 files changed, 861 insertions(+), 28 deletions(-)
>  create mode 100644 drivers/iommu/io-buffer-pool.c
>  create mode 100644 drivers/iommu/io-buffer-pool.h
> 
> --
> 2.32.0.93.g670b81a890-goog
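For readers following the thread: the dispatch rule quoted above ("all non-direct streaming mappings below a configurable size will go through bounce buffers") can be sketched as a plain userspace model. The names (should_bounce, BOUNCE_CUTOFF_DEFAULT) and the cutoff value are illustrative assumptions, not the actual code in the patch set:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical default cutoff; the real one is configurable per the cover letter. */
#define BOUNCE_CUTOFF_DEFAULT (64 * 1024)

struct mapping_request {
	size_t size;
	bool is_direct;	/* dma-direct mappings bypass the iommu entirely */
};

/* Decide whether a streaming mapping should use a pooled bounce buffer:
 * only iommu-backed (non-direct) mappings below the cutoff are bounced. */
static bool should_bounce(const struct mapping_request *req, size_t cutoff)
{
	if (req->is_direct)
		return false;
	return req->size < cutoff;
}
```

The point of the cutoff is the crossover the cover letter describes: below it, a memcpy into a pre-mapped buffer is cheaper than a fresh iommu map/unmap; above it, the copy cost dominates and the normal iommu path wins.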
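The ">99%" drop in map-side time for the read case follows from the sync direction: for a device-to-memory transfer, data only has to be copied out of the bounce buffer at unmap time, never into it at map time. A schematic model of that rule, assuming the usual DMA API direction semantics (the helper names here are invented for illustration, not taken from the patches):

```c
#include <assert.h>
#include <stdbool.h>

enum dma_dir { DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_BIDIRECTIONAL };

/* Must map copy CPU buffer -> bounce buffer? Only if the device reads it. */
static bool bounce_sync_on_map(enum dma_dir dir)
{
	return dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL;
}

/* Must unmap copy bounce buffer -> CPU buffer? Only if the device wrote it. */
static bool bounce_sync_on_unmap(enum dma_dir dir)
{
	return dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL;
}
```

Under this model a DMA_FROM_DEVICE mapping (the fio read benchmark) does no copy at map time, so with the iommu map/unmap itself amortized by the pool, almost all remaining per-mapping cost sits on the unmap side.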