Date: Thu, 20 Jan 2022 14:56:02 +0100
From: Christoph Hellwig
To: Jason Gunthorpe
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, Christoph Hellwig,
	Joao Martins, John Hubbard, Logan Gunthorpe, Ming Lei,
	linux-block@vger.kernel.org, netdev@vger.kernel.org,
	linux-mm@kvack.org, linux-rdma@vger.kernel.org,
	dri-devel@lists.freedesktop.org, nvdimm@lists.linux.dev
Subject: Re: Phyr Starter
Message-ID: <20220120135602.GA11223@lst.de>
References: <20220111004126.GJ2328285@nvidia.com>
In-Reply-To: <20220111004126.GJ2328285@nvidia.com>

On Mon, Jan 10, 2022 at 08:41:26PM -0400, Jason Gunthorpe wrote:
> > Finally, it may be possible to stop using scatterlist to describe the
> > input to the DMA-mapping operation.  We may be able to get struct
> > scatterlist down to just dma_address and dma_length, with chaining
> > handled through an enclosing struct.
>
> Can you talk about this some more? IMHO one of the key properties of
> the scatterlist is that it can hold huge amounts of pages without
> having to do any kind of special allocation due to the chaining.
>
> The same will be true of the phyr idea, right?

No special allocations as in no vmalloc? The chaining still has to
allocate memory using a mempool.

Anyway, to explain my idea, which is very similar but not identical to
the one willy has:

 - on the input side to DMA mapping, the bio_vecs (or phyrs) are chained
   as bios or whatever the containing structure is.  These already exist
   and have infrastructure, at least in the block layer.
 - on the output side I plan for two options:

	1) we have a sane IOMMU and everything will be coalesced into a
	   single dma_range.  This requires setting the block layer merge
	   boundary to match the IOMMU page size, but that is a very good
	   thing to do anyway.
	2) we have no IOMMU (or a weird one) and get one output dma_range
	   per input bio_vec.  We'd either have to support chaining or use
	   vmalloc or huge numbers of entries.
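To make the output side concrete, here is a rough sketch of what I have
in mind.  The dma_range structure and the dma_map_ranges() helper below
are made up purely for illustration; only the kernel types they use are
real:

	/* one mapped, bus-contiguous span */
	struct dma_range {
		dma_addr_t	addr;	/* bus address of the first byte */
		u32		len;	/* length of the span in bytes */
	};

	/*
	 * Map the chained input bio_vecs.  With a sane IOMMU (case 1)
	 * this fills a single dma_range covering everything; without
	 * one (case 2) it fills up to nr_in ranges, one per input
	 * bio_vec.  Returns the number of output ranges used.
	 */
	int dma_map_ranges(struct device *dev, struct bio_vec *in,
			   int nr_in, struct dma_range *out, int nr_out,
			   enum dma_data_direction dir);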
> If you limit to that scenario then we can be more optimal because
> things like byte granular offsets and size in the interior pages don't
> need to exist. Every interior chunk is always aligned to its order and
> we only need to record the order.

The block layer does have small offsets.  Direct I/O can often be
512-byte aligned, and some other passthrough commands can have even
smaller alignment, although I don't think we ever go below 4-byte
alignment anywhere in the block layer.

> IMHO storage density here is quite important, we end up having to keep
> this stuff around for a long time.

If we play these tricks it won't be general purpose enough to get rid
of the existing scatterlist usage.
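To spell out the kind of trick that implies, an order-only range would
pack down to something like this (a purely hypothetical layout, nothing
like it exists in the tree):

	/*
	 * A physical range recorded as just a page frame number plus
	 * the order of a naturally aligned chunk.  Very dense, but
	 * there is no byte offset and no byte length, so it cannot
	 * describe the 512-byte (or smaller) alignments above.
	 */
	struct phyr_packed {
		unsigned long	pfn   : 58;	/* page frame number */
		unsigned long	order : 6;	/* chunk spans 2^order pages */
	};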