Date: Thu, 21 Dec 2023 16:04:45 +0000
From: Jonathan Cameron
To: Paul Cercueil
Cc: Lars-Peter Clausen, Sumit Semwal, Christian König, Vinod Koul,
 Jonathan Corbet, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 dmaengine@vger.kernel.org, linux-iio@vger.kernel.org,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
 linaro-mm-sig@lists.linaro.org, Nuno Sá, Michael Hennerich
Subject: Re: [PATCH v5 6/8] iio: buffer-dma: Enable support for DMABUFs
Message-ID: <20231221160445.0e3e5a8c@jic23-huawei>
In-Reply-To: <20231219175009.65482-7-paul@crapouillou.net>
References: <20231219175009.65482-1-paul@crapouillou.net>
 <20231219175009.65482-7-paul@crapouillou.net>

On Tue, 19 Dec 2023 18:50:07 +0100
Paul Cercueil wrote:

> Implement iio_dma_buffer_attach_dmabuf(), iio_dma_buffer_detach_dmabuf()
> and iio_dma_buffer_transfer_dmabuf(), which can then be used by the IIO
> DMA buffer implementations.
> 
> Signed-off-by: Paul Cercueil
> 

Hi Paul,

A few comments inline, mostly places where the cleanup.h guard() helpers
can simplify the error paths. Overall this looks reasonable to me.

Jonathan

> ---
> v3: Update code to provide the functions that will be used as callbacks
>     for the new IOCTLs.
> ---
>  drivers/iio/buffer/industrialio-buffer-dma.c | 157 +++++++++++++++++--
>  include/linux/iio/buffer-dma.h               |  26 +++
>  2 files changed, 170 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
> index 5610ba67925e..adb64f975064 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dma.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dma.c
> @@ -14,6 +14,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  
> @@ -94,14 +95,24 @@ static void iio_buffer_block_release(struct kref *kref)
>  {
>  	struct iio_dma_buffer_block *block = container_of(kref,
>  		struct iio_dma_buffer_block, kref);
> +	struct iio_dma_buffer_queue *queue = block->queue;
>  
> -	WARN_ON(block->state != IIO_BLOCK_STATE_DEAD);
> +	WARN_ON(block->fileio && block->state != IIO_BLOCK_STATE_DEAD);
>  
> -	dma_free_coherent(block->queue->dev, PAGE_ALIGN(block->size),
> -			  block->vaddr, block->phys_addr);
> +	mutex_lock(&queue->lock);
>  
> -	iio_buffer_put(&block->queue->buffer);
> +	if (block->fileio) {
> +		dma_free_coherent(queue->dev, PAGE_ALIGN(block->size),
> +				  block->vaddr, block->phys_addr);
> +		queue->num_fileio_blocks--;
> +	}
> +
> +	queue->num_blocks--;
>  	kfree(block);
> +
> +	mutex_unlock(&queue->lock);
> +
> +	iio_buffer_put(&queue->buffer);
>  }
>  
>  static void iio_buffer_block_get(struct iio_dma_buffer_block *block)
> @@ -163,7 +174,7 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
>  }
>  
>  static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
> -	struct iio_dma_buffer_queue *queue, size_t size)
> +	struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
>  {
>  	struct iio_dma_buffer_block *block;
>  
> @@ -171,13 +182,16 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
>  	if (!block)
>  		return NULL;
>  
> -	block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size),
> -		&block->phys_addr, GFP_KERNEL);
> -	if (!block->vaddr) {
> -		kfree(block);
> -		return NULL;
> +	if (fileio) {
> +		block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size),
> +			&block->phys_addr, GFP_KERNEL);
> +		if (!block->vaddr) {
> +			kfree(block);
> +			return NULL;
> +		}
>  	}
>  
> +	block->fileio = fileio;
>  	block->size = size;
>  	block->state = IIO_BLOCK_STATE_DONE;
>  	block->queue = queue;
> @@ -186,6 +200,9 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
>  
>  	iio_buffer_get(&queue->buffer);
>  
> +	queue->num_blocks++;
> +	queue->num_fileio_blocks += fileio;

Adding a boolean is non-intuitive.

	if (fileio)
		queue->num_fileio_blocks++;

is probably easier to read, and the compiler should be able to optimise
the condition away.
> +
>  	return block;
>  }
>  
> @@ -211,6 +228,9 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
>  	_iio_dma_buffer_block_done(block);
>  	spin_unlock_irqrestore(&queue->list_lock, flags);
>  
> +	if (!block->fileio)
> +		iio_buffer_signal_dmabuf_done(block->attach, 0);
> +
>  	iio_buffer_block_put_atomic(block);
>  	wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
>  }
> @@ -237,10 +257,14 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
>  		list_del(&block->head);
>  		block->bytes_used = 0;
>  		_iio_dma_buffer_block_done(block);
> +
> +		if (!block->fileio)
> +			iio_buffer_signal_dmabuf_done(block->attach, -EINTR);
>  		iio_buffer_block_put_atomic(block);
>  	}
>  	spin_unlock_irqrestore(&queue->list_lock, flags);
>  
> +	queue->fileio.enabled = false;

While this obviously doesn't need to be conditional, given it can already
be false, it might be easier to follow the code flow if it didn't check
whether we were doing fileio before disabling it.

>  	wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
>  }
>  EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
> @@ -261,6 +285,12 @@ static bool iio_dma_block_reusable(struct iio_dma_buffer_block *block)
>  	}
>  }
>  
> +static bool iio_dma_buffer_fileio_mode(struct iio_dma_buffer_queue *queue)
> +{
> +	return queue->fileio.enabled ||
> +	       queue->num_blocks == queue->num_fileio_blocks;

This is a little odd, so it would be good to have a comment on why this
condition is used. Or rename the function to make clear it's checking
whether fileio is enabled, or can be enabled. At first glance I expected
a function with this name to just be an accessor, e.g.

	return queue->fileio.enabled;

> +}
> +
>  /**
>   * iio_dma_buffer_request_update() - DMA buffer request_update callback
>   * @buffer: The buffer which to request an update
> @@ -287,6 +317,12 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
>  
>  	mutex_lock(&queue->lock);
>  
> +	queue->fileio.enabled = iio_dma_buffer_fileio_mode(queue);
> +
> +	/* If DMABUFs were created, disable fileio interface */
> +	if (!queue->fileio.enabled)
> +		goto out_unlock;
> +
>  	/* Allocations are page aligned */
>  	if (PAGE_ALIGN(queue->fileio.block_size) == PAGE_ALIGN(size))
>  		try_reuse = true;
> @@ -317,7 +353,7 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
>  			block = queue->fileio.blocks[i];
>  			if (block->state == IIO_BLOCK_STATE_DEAD) {
>  				/* Could not reuse it */
> -				iio_buffer_block_put(block);
> +				iio_buffer_block_put_atomic(block);
>  				block = NULL;
>  			} else {
>  				block->size = size;
> @@ -327,7 +363,7 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
>  		}
>  
>  		if (!block) {
> -			block = iio_dma_buffer_alloc_block(queue, size);
> +			block = iio_dma_buffer_alloc_block(queue, size, true);
>  			if (!block) {
>  				ret = -ENOMEM;
>  				goto out_unlock;
> @@ -363,7 +399,7 @@ static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue)
>  	for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
>  		if (!queue->fileio.blocks[i])
>  			continue;
> -		iio_buffer_block_put(queue->fileio.blocks[i]);
> +		iio_buffer_block_put_atomic(queue->fileio.blocks[i]);

For these cases where the put may or may not be atomic, it would be worth
calling out why they need to be atomic.
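One possible rationale (a guess, inferred from the iio_buffer_block_release()
hunk earlier in this patch, which now takes queue->lock): these call sites
already hold queue->lock, so a synchronous put that dropped the last reference
would try to re-acquire the mutex in the release path. If that is indeed the
reason, a comment along these lines could capture it:

	/*
	 * May drop the last reference; iio_buffer_block_release() now takes
	 * queue->lock, which we already hold here, so use the deferred
	 * (atomic) put to avoid deadlocking on the mutex.
	 */
	iio_buffer_block_put_atomic(queue->fileio.blocks[i]);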
>  		queue->fileio.blocks[i] = NULL;
>  	}
>  	queue->fileio.active_block = NULL;
> @@ -384,8 +420,12 @@ static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
>  
>  	block->state = IIO_BLOCK_STATE_ACTIVE;
>  	iio_buffer_block_get(block);
> +
>  	ret = queue->ops->submit(queue, block);
>  	if (ret) {
> +		if (!block->fileio)
> +			iio_buffer_signal_dmabuf_done(block->attach, ret);
> +
>  		/*
>  		 * This is a bit of a problem and there is not much we can do
>  		 * other then wait for the buffer to be disabled and re-enabled
> @@ -588,6 +628,97 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
>  }
>  EXPORT_SYMBOL_GPL(iio_dma_buffer_data_available);
>  
> +struct iio_dma_buffer_block *
> +iio_dma_buffer_attach_dmabuf(struct iio_buffer *buffer,
> +			     struct dma_buf_attachment *attach)
> +{
> +	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
> +	struct iio_dma_buffer_block *block;
> +	int err;
> +
> +	mutex_lock(&queue->lock);

	guard(mutex)(&queue->lock);

> +
> +	/*
> +	 * If the buffer is enabled and in fileio mode new blocks can't be
> +	 * allocated.
> +	 */
> +	if (queue->fileio.enabled) {
> +		err = -EBUSY;

		return ERR_PTR(-EBUSY);

> +		goto err_unlock;
> +	}
> +
> +	block = iio_dma_buffer_alloc_block(queue, attach->dmabuf->size, false);
> +	if (!block) {
> +		err = -ENOMEM;

		return

> +		goto err_unlock;
> +	}
> +
> +	block->attach = attach;
> +
> +	/* Free memory that might be in use for fileio mode */
> +	iio_dma_buffer_fileio_free(queue);
> +
> +	mutex_unlock(&queue->lock);

Drop this as it's unneeded if you use guard().

> +
> +	return block;
> +
> +err_unlock:
> +	mutex_unlock(&queue->lock);
> +	return ERR_PTR(err);
> +}
> +EXPORT_SYMBOL_GPL(iio_dma_buffer_attach_dmabuf);
> +static int iio_dma_can_enqueue_block(struct iio_dma_buffer_block *block)
> +{
> +	struct iio_dma_buffer_queue *queue = block->queue;
> +
> +	/* If in fileio mode buffers can't be enqueued. */
> +	if (queue->fileio.enabled)
> +		return -EBUSY;
> +
> +	switch (block->state) {
> +	case IIO_BLOCK_STATE_QUEUED:
> +		return -EPERM;
> +	case IIO_BLOCK_STATE_DONE:
> +		return 0;
> +	default:
> +		return -EBUSY;

Is this a real condition, or just avoiding a compile warning? If it's
real, I'd like the various states that can lead to it listed here, just
so we can more easily understand why -EBUSY is appropriate.

> +	}
> +}
> +
> +int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
> +				  struct iio_dma_buffer_block *block,
> +				  struct sg_table *sgt,
> +				  size_t size, bool cyclic)
> +{
> +	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
> +	int ret = 0;

No need to initialise ret, as it's always set before use.

> +
> +	mutex_lock(&queue->lock);

	guard(mutex)(&queue->lock);

Then you can do direct returns on error and not bother with the manual
unlock.

> +	ret = iio_dma_can_enqueue_block(block);
> +	if (ret < 0)
> +		goto out_mutex_unlock;
> +
> +	block->bytes_used = size;
> +	block->cyclic = cyclic;
> +	block->sg_table = sgt;
> +
> +	iio_dma_buffer_enqueue(queue, block);
> +
> +out_mutex_unlock:
> +	mutex_unlock(&queue->lock);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(iio_dma_buffer_enqueue_dmabuf);
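To make the guard() suggestion concrete, here is a rough sketch (untested,
assuming the scoped-guard helpers from linux/cleanup.h are available in this
tree) of what iio_dma_buffer_enqueue_dmabuf() could look like with the mutex
guard and direct returns:

int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
				  struct iio_dma_buffer_block *block,
				  struct sg_table *sgt,
				  size_t size, bool cyclic)
{
	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
	int ret;

	/* Dropped automatically on every return path below. */
	guard(mutex)(&queue->lock);

	ret = iio_dma_can_enqueue_block(block);
	if (ret < 0)
		return ret;

	block->bytes_used = size;
	block->cyclic = cyclic;
	block->sg_table = sgt;

	iio_dma_buffer_enqueue(queue, block);

	return 0;
}

The same pattern would apply to iio_dma_buffer_attach_dmabuf(), where the
error paths can then return ERR_PTR() directly instead of jumping to
err_unlock.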