Date: Mon, 28 Mar 2022 18:37:01 +0100
From: Jonathan Cameron
To: Paul Cercueil
Cc: Michael Hennerich, Lars-Peter Clausen, Christian König, Sumit Semwal,
 Jonathan Corbet, Alexandru Ardelean, dri-devel@lists.freedesktop.org,
 linaro-mm-sig@lists.linaro.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-iio@vger.kernel.org
Subject: Re: [PATCH v2 05/12] iio: core: Add new DMABUF interface infrastructure
Message-ID: <20220328183701.02884cc3@jic23-huawei>
In-Reply-To: <20220207125933.81634-6-paul@crapouillou.net>
References: <20220207125933.81634-1-paul@crapouillou.net>
 <20220207125933.81634-6-paul@crapouillou.net>
On Mon, 7 Feb 2022 12:59:26 +0000
Paul Cercueil wrote:

> Add the necessary infrastructure to the IIO core to support a new
> optional DMABUF based interface.
>
> The advantage of this new DMABUF based interface vs. the read()
> interface, is that it avoids an extra copy of the data between the
> kernel and userspace. This is particularly userful for high-speed

useful

> devices which produce several megabytes or even gigabytes of data per
> second.
>
> The data in this new DMABUF interface is managed at the granularity of
> DMABUF objects. Reducing the granularity from byte level to block level
> is done to reduce the userspace-kernelspace synchronization overhead
> since performing syscalls for each byte at a few Mbps is just not
> feasible.
>
> This of course leads to a slightly increased latency. For this reason an
> application can choose the size of the DMABUFs as well as how many it
> allocates. E.g. two DMABUFs would be a traditional double buffering
> scheme. But using a higher number might be necessary to avoid
> underflow/overflow situations in the presence of scheduling latencies.
>
> As part of the interface, 2 new IOCTLs have been added:
>
> IIO_BUFFER_DMABUF_ALLOC_IOCTL(struct iio_dmabuf_alloc_req *):
>  Each call will allocate a new DMABUF object. The return value (if not
>  a negative errno value as error) will be the file descriptor of the new
>  DMABUF.
>
> IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *):
>  Place the DMABUF object into the queue pending for hardware process.
>
> These two IOCTLs have to be performed on the IIO buffer's file
> descriptor, obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.

Just to check, do they work on the old deprecated chardev route?
Normally we can directly access the first buffer without the ioctl.

>
> To access the data stored in a block by userspace the block must be
> mapped to the process's memory. This is done by calling mmap() on the
> DMABUF's file descriptor.
>
> Before accessing the data through the map, you must use the
> DMA_BUF_IOCTL_SYNC(struct dma_buf_sync *) ioctl, with the
> DMA_BUF_SYNC_START flag, to make sure that the data is available.
> This call may block until the hardware is done with this block. Once
> you are done reading or writing the data, you must use this ioctl again
> with the DMA_BUF_SYNC_END flag, before enqueueing the DMABUF to the
> kernel's queue.
>
> If you need to know when the hardware is done with a DMABUF, you can
> poll its file descriptor for the EPOLLOUT event.
>
> Finally, to destroy a DMABUF object, simply call close() on its file
> descriptor.
>
> A typical workflow for the new interface is:
>
>   for block in blocks:
>     DMABUF_ALLOC block
>     mmap block
>
>   enable buffer
>
>   while !done
>     for block in blocks:
>       DMABUF_ENQUEUE block
>
>       DMABUF_SYNC_START block
>       process data
>       DMABUF_SYNC_END block
>
>   disable buffer
>
>   for block in blocks:
>     close block

Given my very limited knowledge of dma-buf, I'll leave commenting on
the flow to others who know if this looks 'standards' or not ;)

Code looks sane to me..

>
> v2: Only allow the new IOCTLs on the buffer FD created with
>     IIO_BUFFER_GET_FD_IOCTL().
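For readers who want to see the workflow above as actual code, here is a
rough, untested userspace sketch of the allocation and mapping stage. The
ioctls and struct layouts are the ones added by this patch; buf_fd is
assumed to be the buffer fd obtained earlier via IIO_BUFFER_GET_FD_IOCTL(),
and NUM_BLOCKS / BLOCK_SIZE / alloc_and_map_blocks() are made-up names for
illustration only:

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/iio/buffer.h>	/* IIO_BUFFER_DMABUF_* ioctls from this patch */

#define NUM_BLOCKS	4		/* illustrative: application's choice */
#define BLOCK_SIZE	(1 << 20)	/* illustrative: 1 MiB per DMABUF */

static int alloc_and_map_blocks(int buf_fd, int *dmabuf_fds, void **maps)
{
	struct iio_dmabuf_alloc_req req = { .size = BLOCK_SIZE };
	int i;

	for (i = 0; i < NUM_BLOCKS; i++) {
		/* ioctl() returns the fd of the freshly allocated DMABUF */
		dmabuf_fds[i] = ioctl(buf_fd, IIO_BUFFER_DMABUF_ALLOC_IOCTL, &req);
		if (dmabuf_fds[i] < 0)
			return -1;

		/* Map the DMABUF so its data can be accessed directly */
		maps[i] = mmap(NULL, BLOCK_SIZE, PROT_READ | PROT_WRITE,
			       MAP_SHARED, dmabuf_fds[i], 0);
		if (maps[i] == MAP_FAILED)
			return -1;
	}
	return 0;
}

As the commit message notes, two blocks gives classic double buffering,
while a larger NUM_BLOCKS trades memory for tolerance to scheduling
latency.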
>
> Signed-off-by: Paul Cercueil
> ---
>  drivers/iio/industrialio-buffer.c | 55 +++++++++++++++++++++++++++++++
>  include/linux/iio/buffer_impl.h   |  8 +++++
>  include/uapi/linux/iio/buffer.h   | 29 ++++++++++++++++
>  3 files changed, 92 insertions(+)
>
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index 94eb9f6cf128..72f333a519bc 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -17,6 +17,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>
> @@ -1520,11 +1521,65 @@ static int iio_buffer_chrdev_release(struct inode *inode, struct file *filep)
>  	return 0;
>  }
>
> +static int iio_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
> +				     struct iio_dmabuf __user *user_buf)
> +{
> +	struct iio_dmabuf dmabuf;
> +
> +	if (!buffer->access->enqueue_dmabuf)
> +		return -EPERM;
> +
> +	if (copy_from_user(&dmabuf, user_buf, sizeof(dmabuf)))
> +		return -EFAULT;
> +
> +	if (dmabuf.flags & ~IIO_BUFFER_DMABUF_SUPPORTED_FLAGS)
> +		return -EINVAL;
> +
> +	return buffer->access->enqueue_dmabuf(buffer, &dmabuf);
> +}
> +
> +static int iio_buffer_alloc_dmabuf(struct iio_buffer *buffer,
> +				   struct iio_dmabuf_alloc_req __user *user_req)
> +{
> +	struct iio_dmabuf_alloc_req req;
> +
> +	if (!buffer->access->alloc_dmabuf)
> +		return -EPERM;
> +
> +	if (copy_from_user(&req, user_req, sizeof(req)))
> +		return -EFAULT;
> +
> +	if (req.resv)
> +		return -EINVAL;
> +
> +	return buffer->access->alloc_dmabuf(buffer, &req);
> +}
> +
> +static long iio_buffer_chrdev_ioctl(struct file *filp,
> +				    unsigned int cmd, unsigned long arg)
> +{
> +	struct iio_dev_buffer_pair *ib = filp->private_data;
> +	struct iio_buffer *buffer = ib->buffer;
> +	void __user *_arg = (void __user *)arg;
> +
> +	switch (cmd) {
> +	case IIO_BUFFER_DMABUF_ALLOC_IOCTL:
> +		return iio_buffer_alloc_dmabuf(buffer, _arg);
> +	case IIO_BUFFER_DMABUF_ENQUEUE_IOCTL:
> +		/* TODO: support non-blocking enqueue operation */
> +		return iio_buffer_enqueue_dmabuf(buffer, _arg);
> +	default:
> +		return IIO_IOCTL_UNHANDLED;
> +	}
> +}
> +
>  static const struct file_operations iio_buffer_chrdev_fileops = {
>  	.owner = THIS_MODULE,
>  	.llseek = noop_llseek,
>  	.read = iio_buffer_read,
>  	.write = iio_buffer_write,
> +	.unlocked_ioctl = iio_buffer_chrdev_ioctl,
> +	.compat_ioctl = compat_ptr_ioctl,
>  	.poll = iio_buffer_poll,
>  	.release = iio_buffer_chrdev_release,
>  };
>
> diff --git a/include/linux/iio/buffer_impl.h b/include/linux/iio/buffer_impl.h
> index e2ca8ea23e19..728541bc2c63 100644
> --- a/include/linux/iio/buffer_impl.h
> +++ b/include/linux/iio/buffer_impl.h
> @@ -39,6 +39,9 @@ struct iio_buffer;
>   * device stops sampling. Calles are balanced with @enable.
>   * @release: called when the last reference to the buffer is dropped,
>   * should free all resources allocated by the buffer.
> + * @alloc_dmabuf: called from userspace via ioctl to allocate one DMABUF.
> + * @enqueue_dmabuf: called from userspace via ioctl to queue this DMABUF
> + *	object to this buffer. Requires a valid DMABUF fd.
>   * @modes: Supported operating modes by this buffer type
>   * @flags: A bitmask combination of INDIO_BUFFER_FLAG_*
>   *
> @@ -68,6 +71,11 @@ struct iio_buffer_access_funcs {
>
>  	void (*release)(struct iio_buffer *buffer);
>
> +	int (*alloc_dmabuf)(struct iio_buffer *buffer,
> +			    struct iio_dmabuf_alloc_req *req);
> +	int (*enqueue_dmabuf)(struct iio_buffer *buffer,
> +			      struct iio_dmabuf *block);
> +
>  	unsigned int modes;
>  	unsigned int flags;
>  };
> diff --git a/include/uapi/linux/iio/buffer.h b/include/uapi/linux/iio/buffer.h
> index 13939032b3f6..e4621b926262 100644
> --- a/include/uapi/linux/iio/buffer.h
> +++ b/include/uapi/linux/iio/buffer.h
> @@ -5,6 +5,35 @@
>  #ifndef _UAPI_IIO_BUFFER_H_
>  #define _UAPI_IIO_BUFFER_H_
>
> +#include
> +
> +#define IIO_BUFFER_DMABUF_SUPPORTED_FLAGS	0x00000000
> +
> +/**
> + * struct iio_dmabuf_alloc_req - Descriptor for allocating IIO DMABUFs
> + * @size:	the size of a single DMABUF
> + * @resv:	reserved
> + */
> +struct iio_dmabuf_alloc_req {
> +	__u64 size;
> +	__u64 resv;
> +};
> +
> +/**
> + * struct iio_dmabuf - Descriptor for a single IIO DMABUF object
> + * @fd:		file descriptor of the DMABUF object
> + * @flags:	one or more IIO_BUFFER_DMABUF_* flags
> + * @bytes_used:	number of bytes used in this DMABUF for the data transfer.
> + *		If zero, the full buffer is used.
> + */
> +struct iio_dmabuf {
> +	__u32 fd;
> +	__u32 flags;
> +	__u64 bytes_used;
> +};
> +
>  #define IIO_BUFFER_GET_FD_IOCTL		_IOWR('i', 0x91, int)
> +#define IIO_BUFFER_DMABUF_ALLOC_IOCTL		_IOW('i', 0x92, struct iio_dmabuf_alloc_req)
> +#define IIO_BUFFER_DMABUF_ENQUEUE_IOCTL	_IOW('i', 0x93, struct iio_dmabuf)
>
>  #endif /* _UAPI_IIO_BUFFER_H_ */
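For completeness, a rough, untested sketch of the per-block processing
loop once the buffer is enabled, combining the new enqueue ioctl with the
standard DMA_BUF_IOCTL_SYNC bracketing from <linux/dma-buf.h>. Here
process_one_block() is a made-up helper name and process_block() is a
placeholder for whatever the application does with the mapped data:

#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>	/* DMA_BUF_IOCTL_SYNC, struct dma_buf_sync */
#include <linux/iio/buffer.h>	/* IIO_BUFFER_DMABUF_ENQUEUE_IOCTL */

/* Application-specific, defined elsewhere. */
void process_block(void *data, size_t size);

static int process_one_block(int buf_fd, int dmabuf_fd, void *data, size_t size)
{
	struct iio_dmabuf block = {
		.fd = dmabuf_fd,
		.bytes_used = 0,	/* 0 == use the whole buffer */
	};
	struct dma_buf_sync sync;

	/* Hand the DMABUF to the hardware queue via the IIO buffer fd */
	if (ioctl(buf_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &block) < 0)
		return -1;

	/* Wait until the hardware is done before touching the mapping */
	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW;
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync) < 0)
		return -1;

	process_block(data, size);

	/* Tell the kernel the CPU is done, before re-enqueueing this block */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}

As described in the commit message, the DMA_BUF_SYNC_START call may block
until the hardware has finished with the block; an application that would
rather not block can instead poll the DMABUF fd for EPOLLOUT first.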