Date: Tue, 29 Mar 2022 18:34:58 +0100
From: Paul Cercueil
Subject: Re: [PATCH v2 12/12] Documentation: iio: Document high-speed DMABUF based API
To: Daniel Vetter
Cc: Jonathan Cameron, Jonathan Lemon, Michael Hennerich, Jonathan Corbet,
    linux-iio@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, Alexandru Ardelean, Christian König
References: <20220207125933.81634-1-paul@crapouillou.net>
            <20220207130140.81891-1-paul@crapouillou.net>
            <20220207130140.81891-2-paul@crapouillou.net>

On Tue, Mar 29 2022 at 16:07:21 +0200, Daniel Vetter wrote:
> On Tue, Mar 29, 2022 at 10:47:23AM +0100, Paul Cercueil wrote:
>> Hi Daniel,
>>
>> On Tue, Mar 29 2022 at 10:54:43 +0200, Daniel Vetter wrote:
>> > On Mon, Feb 07, 2022 at 01:01:40PM +0000, Paul Cercueil wrote:
>> > > Document the new DMABUF based API.
>> > >
>> > > v2: - Explicitly state that the new interface is optional and is
>> > >       not implemented by all drivers.
>> > >     - The IOCTLs can now only be called on the buffer FD returned by
>> > >       IIO_BUFFER_GET_FD_IOCTL.
>> > >     - Move the page up a bit in the index since it is core stuff
>> > >       and not driver-specific.
>> > >
>> > > Signed-off-by: Paul Cercueil
>> > > ---
>> > >  Documentation/driver-api/dma-buf.rst |  2 +
>> > >  Documentation/iio/dmabuf_api.rst     | 94 ++++++++++++++++++++++++++++
>> > >  Documentation/iio/index.rst          |  2 +
>> > >  3 files changed, 98 insertions(+)
>> > >  create mode 100644 Documentation/iio/dmabuf_api.rst
>> > >
>> > > diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
>> > > index 2cd7db82d9fe..d3c9b58d2706 100644
>> > > --- a/Documentation/driver-api/dma-buf.rst
>> > > +++ b/Documentation/driver-api/dma-buf.rst
>> > > @@ -1,3 +1,5 @@
>> > > +.. _dma-buf:
>> > > +
>> > >  Buffer Sharing and Synchronization
>> > >  ==================================
>> > >
>> > > diff --git a/Documentation/iio/dmabuf_api.rst b/Documentation/iio/dmabuf_api.rst
>> > > new file mode 100644
>> > > index 000000000000..43bb2c1b9fdc
>> > > --- /dev/null
>> > > +++ b/Documentation/iio/dmabuf_api.rst
>> > > @@ -0,0 +1,94 @@
>> > > +===================================
>> > > +High-speed DMABUF interface for IIO
>> > > +===================================
>> > > +
>> > > +1. Overview
>> > > +===========
>> > > +
>> > > +The Industrial I/O subsystem supports access to buffers through a file-based
>> > > +interface, with read() and write() access calls through the IIO device's dev
>> > > +node.
>> > > +
>> > > +It additionally supports a DMABUF based interface, where the userspace
>> > > +application can allocate and append DMABUF objects to the buffer's queue.
>> > > +This interface is however optional and is not available in all drivers.
>> > > +
>> > > +The advantage of this DMABUF based interface vs. the read()
>> > > +interface, is that it avoids an extra copy of the data between the
>> > > +kernel and userspace. This is particularly useful for high-speed
>> > > +devices which produce several megabytes or even gigabytes of data per
>> > > +second.
>> > > +
>> > > +The data in this DMABUF interface is managed at the granularity of
>> > > +DMABUF objects. Reducing the granularity from byte level to block level
>> > > +is done to reduce the userspace-kernelspace synchronization overhead
>> > > +since performing syscalls for each byte at a few Mbps is just not
>> > > +feasible.
>> > > +
>> > > +This of course leads to a slightly increased latency. For this reason an
>> > > +application can choose the size of the DMABUFs as well as how many it
>> > > +allocates. E.g. two DMABUFs would be a traditional double buffering
>> > > +scheme. But using a higher number might be necessary to avoid
>> > > +underflow/overflow situations in the presence of scheduling latencies.
>> >
>> > So this reads a lot like reinventing io-uring with pre-registered O_DIRECT
>> > memory ranges. Except it's using dma-buf and hand-rolling a lot of pieces
>> > instead of io-uring and O_DIRECT.
>>
>> I don't see how io_uring would help us. It's an async I/O framework, does it
>> allow us to access a kernel buffer without copying the data? Does it allow
>> us to zero-copy the data to a network interface?
>
> With networking, do you mean rdma, or some other kind of networking?
> Anything else than rdma doesn't support dma-buf, and I don't think it will
> likely ever do so. Similar it's really tricky to glue dma-buf support into
> the block layer.

By networking I mean standard sockets. If I'm not mistaken, Jonathan
Lemon's work on zctap was to add dma-buf import/export support to
standard sockets.

> Wrt io_uring, yes it's async, but that's not the point. The point is that
> with io_uring you pre-register ranges for reads and writes to target,
> which in combination with O_DIRECT, makes it effectively (and efficient!)
> zero-copy. Plus it has full integration with both networking and normal
> file io, which dma-buf just doesn't have.
>
> Like you _cannot_ do zero copy from a dma-buf into a normal file. You
> absolutely can do the same with io_uring.
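
(For reference, the io_uring pattern described above - pre-registering a
memory range and reading into it with O_DIRECT - looks roughly like the
sketch below, using liburing. The device path and sizes are purely
illustrative and error handling is omitted; this is not presented as a
drop-in replacement for the IIO interface discussed in this thread.)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdlib.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /* Read one block from a device into a pre-registered, O_DIRECT-aligned
     * buffer.  The fixed (registered) buffer is what avoids per-request
     * pinning/copying on the kernel side. */
    static int read_fixed_block(const char *path, size_t len)
    {
            struct io_uring ring;
            struct io_uring_sqe *sqe;
            struct io_uring_cqe *cqe;
            struct iovec iov;
            void *buf;
            int fd;

            posix_memalign(&buf, 4096, len);        /* O_DIRECT needs alignment */
            iov.iov_base = buf;
            iov.iov_len = len;

            io_uring_queue_init(8, &ring, 0);
            io_uring_register_buffers(&ring, &iov, 1);  /* pre-register the range */

            fd = open(path, O_RDONLY | O_DIRECT);   /* e.g. "/dev/sdX", illustrative */

            sqe = io_uring_get_sqe(&ring);
            io_uring_prep_read_fixed(sqe, fd, buf, len, 0, 0);  /* buf_index 0 */
            io_uring_submit(&ring);

            io_uring_wait_cqe(&ring, &cqe);         /* completion carries the result */
            io_uring_cqe_seen(&ring, cqe);

            close(fd);
            io_uring_queue_exit(&ring);
            free(buf);
            return 0;
    }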
I believe io_uring does zero-copy the same way as splice(), by
duplicating/moving pages? Because that wouldn't work with DMA coherent
memory, which is contiguous and not backed by pages.

>> > At least if the entire justification for dma-buf support is zero-copy
>> > support between the driver and userspace it's _really_ not the right tool
>> > for the job. dma-buf is for zero-copy between devices, with cpu access
>> > from userspace (or kernel fwiw) being very much the exception (and often
>> > flat-out not supported at all).
>>
>> We want both. Using dma-bufs for the driver/userspace interface is a
>> convenience as we then have a unique API instead of two distinct ones.
>>
>> Why should CPU access from userspace be the exception? It works fine for IIO
>> dma-bufs. You keep warning about this being a terrible design, but I simply
>> don't see it.
>
> It depends really on what you're trying to do, and there's extremely high
> chances it will simply not work.

Well it does work though. The userspace interface is stupidly simple
here - one dma-buf, backed by DMA coherent memory, is enqueued for
processing by the DMA. The userspace process calling the "sync" ioctl on
the dma-buf will block until the transfer is complete, and then userspace
can access it again.

> Unless you want to do zero copy with a gpu, or something which is in that
> ecosystem of accelerators and devices, then dma-buf is probably not what
> you're looking for.
> -Daniel

I want to do zero-copy between an IIO device and the network/USB, and
right now there is absolutely nothing in place that allows me to do
that. So I have to get creative.

Cheers,
-Paul

>>
>> > > +
>> > > +2. User API
>> > > +===========
>> > > +
>> > > +``IIO_BUFFER_DMABUF_ALLOC_IOCTL(struct iio_dmabuf_alloc_req *)``
>> > > +----------------------------------------------------------------
>> > > +
>> > > +Each call will allocate a new DMABUF object. The return value (if not
>> > > +a negative errno value as error) will be the file descriptor of the new
>> > > +DMABUF.
>> > > +
>> > > +``IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *)``
>> > > +--------------------------------------------------------
>> > > +
>> > > +Place the DMABUF object into the queue pending for hardware processing.
>> > > +
>> > > +These two IOCTLs have to be performed on the IIO buffer's file
>> > > +descriptor, obtained using the `IIO_BUFFER_GET_FD_IOCTL` ioctl.
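
(To make the flow concrete, here is a rough sketch of how a userspace
consumer might drive these two ioctls together with the DMA_BUF_IOCTL_SYNC
ioctl described in the "Usage" section below. It assumes the uapi
definitions added by this series; the struct field names used here - size,
fd - are only placeholders, since the actual layouts are not reproduced in
this document, and error handling is omitted.)

    #include <linux/dma-buf.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* One capture cycle for a single block.  buf_fd is the IIO buffer's
     * file descriptor, obtained with IIO_BUFFER_GET_FD_IOCTL (not shown). */
    static void capture_one_block(int buf_fd, size_t block_size)
    {
            struct iio_dmabuf_alloc_req req = {
                    .size = block_size,     /* placeholder field name */
            };
            /* Returns the new DMABUF's file descriptor on success. */
            int dmabuf_fd = ioctl(buf_fd, IIO_BUFFER_DMABUF_ALLOC_IOCTL, &req);

            void *data = mmap(NULL, block_size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, dmabuf_fd, 0);

            struct iio_dmabuf block = {
                    .fd = dmabuf_fd,        /* placeholder field name */
            };
            /* Queue the block for hardware processing (on the buffer fd). */
            ioctl(buf_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &block);

            /* Claim CPU access; may block until the DMA transfer is done. */
            struct dma_buf_sync sync = {
                    .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ,
            };
            ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

            /* ... read the samples through `data` ... */

            sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
            ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

            munmap(data, block_size);
            close(dmabuf_fd);
    }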
>> > > +
>> > > +3. Usage
>> > > +========
>> > > +
>> > > +To access the data stored in a block by userspace the block must be
>> > > +mapped to the process's memory. This is done by calling mmap() on the
>> > > +DMABUF's file descriptor.
>> > > +
>> > > +Before accessing the data through the map, you must use the
>> > > +DMA_BUF_IOCTL_SYNC(struct dma_buf_sync *) ioctl, with the
>> > > +DMA_BUF_SYNC_START flag, to make sure that the data is available.
>> > > +This call may block until the hardware is done with this block. Once
>> > > +you are done reading or writing the data, you must use this ioctl again
>> > > +with the DMA_BUF_SYNC_END flag, before enqueueing the DMABUF to the
>> > > +kernel's queue.
>> > > +
>> > > +If you need to know when the hardware is done with a DMABUF, you can
>> > > +poll its file descriptor for the EPOLLOUT event.
>> > > +
>> > > +Finally, to destroy a DMABUF object, simply call close() on its file
>> > > +descriptor.
>> > > +
>> > > +For more information about manipulating DMABUF objects, see: :ref:`dma-buf`.
>> > > +
>> > > +A typical workflow for the new interface is:
>> > > +
>> > > +    for block in blocks:
>> > > +      DMABUF_ALLOC block
>> > > +      mmap block
>> > > +
>> > > +    enable buffer
>> > > +
>> > > +    while !done
>> > > +      for block in blocks:
>> > > +        DMABUF_ENQUEUE block
>> > > +
>> > > +        DMABUF_SYNC_START block
>> > > +        process data
>> > > +        DMABUF_SYNC_END block
>> > > +
>> > > +    disable buffer
>> > > +
>> > > +    for block in blocks:
>> > > +      close block
>> > > diff --git a/Documentation/iio/index.rst b/Documentation/iio/index.rst
>> > > index 58b7a4ebac51..669deb67ddee 100644
>> > > --- a/Documentation/iio/index.rst
>> > > +++ b/Documentation/iio/index.rst
>> > > @@ -9,4 +9,6 @@ Industrial I/O
>> > >
>> > >     iio_configfs
>> > >
>> > > +   dmabuf_api
>> > > +
>> > >     ep93xx_adc
>> > > --
>> > > 2.34.1
>> > >
>> >
>> > --
>> > Daniel Vetter
>> > Software Engineer, Intel Corporation
>> > http://blog.ffwll.ch
>>
>>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch