From: Paul Cercueil <paul@crapouillou.net>
To: Jonathan Cameron, Christian König, Jonathan Corbet, Lars-Peter Clausen,
    Vinod Koul, Sumit Semwal
Cc: Nuno Sa, Michael Hennerich, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org,
    linux-iio@vger.kernel.org, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
    Paul Cercueil
Subject: [PATCH v8 4/6] iio: buffer-dma: Enable support for DMABUFs
Date: Fri, 8 Mar 2024 18:00:44 +0100
Message-ID: <20240308170046.92899-5-paul@crapouillou.net>
In-Reply-To: <20240308170046.92899-1-paul@crapouillou.net>
References: <20240308170046.92899-1-paul@crapouillou.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Implement iio_dma_buffer_attach_dmabuf(), iio_dma_buffer_detach_dmabuf()
and iio_dma_buffer_transfer_dmabuf(), which can then be used by the IIO
DMA buffer implementations.

Signed-off-by: Paul Cercueil
Signed-off-by: Nuno Sa
---
v3: Update code to provide the functions that will be used as callbacks
    for the new IOCTLs.

v6: - Update iio_dma_buffer_enqueue_dmabuf() to take a dma_fence pointer
    - Pass that dma_fence pointer along to iio_buffer_signal_dmabuf_done()
    - Add iio_dma_buffer_lock_queue() / iio_dma_buffer_unlock_queue()
    - Do not lock the queue in iio_dma_buffer_enqueue_dmabuf(). The caller
      will ensure that it has been locked already.
    - Replace "int += bool;" with "if (bool) int++;"
    - Use dma_fence_begin/end_signalling in the dma_fence critical sections
    - Use one "num_dmabufs" field instead of one "num_blocks" and one
      "num_fileio_blocks". Make it an atomic_t, which makes it possible to
      decrement it atomically in iio_buffer_block_release() without having
      to lock the queue mutex; in turn, this means we don't need to use
      iio_buffer_block_put_atomic() everywhere to avoid locking the queue
      mutex twice.
    - Use cleanup.h guard(mutex) when possible
    - Explicitly list all states in the switch in iio_dma_can_enqueue_block()
    - Rename iio_dma_buffer_fileio_mode() to iio_dma_buffer_can_use_fileio(),
      and add a comment explaining why it cannot race vs. DMABUF.
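(Not part of the patch: a minimal sketch of how a DMA buffer implementation
is expected to wire these helpers up, assuming the struct
iio_buffer_access_funcs callback names added by patch 3/6 of this series.
A real driver may wrap the helpers, e.g. to map the scatterlist onto its
DMA channel, before calling into the generic code.)

/*
 * Illustrative sketch only -- not part of this patch. It assumes the
 * DMABUF callbacks added to struct iio_buffer_access_funcs earlier in
 * this series (patch 3/6) are named as below; a driver can also provide
 * its own wrappers around these helpers.
 */
#include <linux/iio/buffer_impl.h>
#include <linux/iio/buffer-dma.h>

static const struct iio_buffer_access_funcs example_dma_buffer_ops = {
	/* ... the buffer's existing read/write/fileio callbacks ... */

	/* Hook up the generic DMABUF helpers introduced by this patch: */
	.attach_dmabuf	= iio_dma_buffer_attach_dmabuf,
	.detach_dmabuf	= iio_dma_buffer_detach_dmabuf,
	.enqueue_dmabuf	= iio_dma_buffer_enqueue_dmabuf,
	.lock_queue	= iio_dma_buffer_lock_queue,
	.unlock_queue	= iio_dma_buffer_unlock_queue,
};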
---
 drivers/iio/buffer/industrialio-buffer-dma.c | 181 +++++++++++++++++--
 include/linux/iio/buffer-dma.h               |  31 ++++
 2 files changed, 201 insertions(+), 11 deletions(-)

diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 5610ba67925e..c0f539af98f9 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -4,6 +4,8 @@
  *  Author: Lars-Peter Clausen <lars@metafoo.de>
  */
 
+#include <linux/atomic.h>
+#include <linux/cleanup.h>
 #include <linux/slab.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -14,6 +16,8 @@
 #include <linux/poll.h>
 #include <linux/iio/buffer_impl.h>
 #include <linux/iio/buffer-dma.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
 #include <linux/dma-mapping.h>
 #include <linux/sizes.h>
@@ -94,13 +98,18 @@ static void iio_buffer_block_release(struct kref *kref)
 {
 	struct iio_dma_buffer_block *block = container_of(kref,
 		struct iio_dma_buffer_block, kref);
+	struct iio_dma_buffer_queue *queue = block->queue;
 
-	WARN_ON(block->state != IIO_BLOCK_STATE_DEAD);
+	WARN_ON(block->fileio && block->state != IIO_BLOCK_STATE_DEAD);
 
-	dma_free_coherent(block->queue->dev, PAGE_ALIGN(block->size),
-					block->vaddr, block->phys_addr);
+	if (block->fileio) {
+		dma_free_coherent(queue->dev, PAGE_ALIGN(block->size),
+				  block->vaddr, block->phys_addr);
+	} else {
+		atomic_dec(&queue->num_dmabufs);
+	}
 
-	iio_buffer_put(&block->queue->buffer);
+	iio_buffer_put(&queue->buffer);
 	kfree(block);
 }
@@ -163,7 +172,7 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
 }
 
 static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
-	struct iio_dma_buffer_queue *queue, size_t size)
+	struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
 {
 	struct iio_dma_buffer_block *block;
 
@@ -171,13 +180,16 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
 	if (!block)
 		return NULL;
 
-	block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size),
-		&block->phys_addr, GFP_KERNEL);
-	if (!block->vaddr) {
-		kfree(block);
-		return NULL;
+	if (fileio) {
+		block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size),
+						  &block->phys_addr, GFP_KERNEL);
+		if (!block->vaddr) {
+			kfree(block);
+			return NULL;
+		}
 	}
 
+	block->fileio = fileio;
 	block->size = size;
 	block->state = IIO_BLOCK_STATE_DONE;
 	block->queue = queue;
@@ -186,6 +198,9 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
 
 	iio_buffer_get(&queue->buffer);
 
+	if (!fileio)
+		atomic_inc(&queue->num_dmabufs);
+
 	return block;
 }
@@ -206,13 +221,21 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
 {
 	struct iio_dma_buffer_queue *queue = block->queue;
 	unsigned long flags;
+	bool cookie;
+
+	cookie = dma_fence_begin_signalling();
 
 	spin_lock_irqsave(&queue->list_lock, flags);
 	_iio_dma_buffer_block_done(block);
 	spin_unlock_irqrestore(&queue->list_lock, flags);
 
+	if (!block->fileio)
+		iio_buffer_signal_dmabuf_done(block->fence, 0);
+
 	iio_buffer_block_put_atomic(block);
 	wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
+
+	dma_fence_end_signalling(cookie);
 }
 EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done);
@@ -231,17 +254,27 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
 {
 	struct iio_dma_buffer_block *block, *_block;
 	unsigned long flags;
+	bool cookie;
+
+	cookie = dma_fence_begin_signalling();
 
 	spin_lock_irqsave(&queue->list_lock, flags);
 	list_for_each_entry_safe(block, _block, list, head) {
 		list_del(&block->head);
 		block->bytes_used = 0;
 		_iio_dma_buffer_block_done(block);
+
+		if (!block->fileio)
+			iio_buffer_signal_dmabuf_done(block->fence, -EINTR);
 		iio_buffer_block_put_atomic(block);
 	}
 	spin_unlock_irqrestore(&queue->list_lock, flags);
 
+	if (queue->fileio.enabled)
+		queue->fileio.enabled = false;
+
 	wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
+	dma_fence_end_signalling(cookie);
 }
 EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
@@ -261,6 +294,16 @@ static bool iio_dma_block_reusable(struct iio_dma_buffer_block *block)
 	}
 }
 
+static bool iio_dma_buffer_can_use_fileio(struct iio_dma_buffer_queue *queue)
+{
+	/*
+	 * Note that queue->num_dmabufs cannot increase while the queue is
+	 * locked, it can only decrease, so it does not race against
+	 * iio_dma_buffer_alloc_block().
+	 */
+	return queue->fileio.enabled || !atomic_read(&queue->num_dmabufs);
+}
+
 /**
  * iio_dma_buffer_request_update() - DMA buffer request_update callback
  * @buffer: The buffer which to request an update
@@ -287,6 +330,12 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
 
 	mutex_lock(&queue->lock);
 
+	queue->fileio.enabled = iio_dma_buffer_can_use_fileio(queue);
+
+	/* If DMABUFs were created, disable fileio interface */
+	if (!queue->fileio.enabled)
+		goto out_unlock;
+
 	/* Allocations are page aligned */
 	if (PAGE_ALIGN(queue->fileio.block_size) == PAGE_ALIGN(size))
 		try_reuse = true;
@@ -327,7 +376,7 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
 		}
 
 		if (!block) {
-			block = iio_dma_buffer_alloc_block(queue, size);
+			block = iio_dma_buffer_alloc_block(queue, size, true);
 			if (!block) {
 				ret = -ENOMEM;
 				goto out_unlock;
@@ -384,8 +433,12 @@ static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
 
 	block->state = IIO_BLOCK_STATE_ACTIVE;
 	iio_buffer_block_get(block);
+
 	ret = queue->ops->submit(queue, block);
 	if (ret) {
+		if (!block->fileio)
+			iio_buffer_signal_dmabuf_done(block->fence, ret);
+
 		/*
 		 * This is a bit of a problem and there is not much we can do
 		 * other then wait for the buffer to be disabled and re-enabled
@@ -588,6 +641,112 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
 }
 EXPORT_SYMBOL_GPL(iio_dma_buffer_data_available);
 
+struct iio_dma_buffer_block *
+iio_dma_buffer_attach_dmabuf(struct iio_buffer *buffer,
+			     struct dma_buf_attachment *attach)
+{
+	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
+	struct iio_dma_buffer_block *block;
+
+	guard(mutex)(&queue->lock);
+
+	/*
+	 * If the buffer is enabled and in fileio mode new blocks can't be
+	 * allocated.
+	 */
+	if (queue->fileio.enabled)
+		return ERR_PTR(-EBUSY);
+
+	block = iio_dma_buffer_alloc_block(queue, attach->dmabuf->size, false);
+	if (!block)
+		return ERR_PTR(-ENOMEM);
+
+	block->attach = attach;
+
+	/* Free memory that might be in use for fileio mode */
+	iio_dma_buffer_fileio_free(queue);
+
+	return block;
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_attach_dmabuf);
+
+void iio_dma_buffer_detach_dmabuf(struct iio_buffer *buffer,
+				  struct iio_dma_buffer_block *block)
+{
+	block->state = IIO_BLOCK_STATE_DEAD;
+	iio_buffer_block_put_atomic(block);
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_detach_dmabuf);
+
+static int iio_dma_can_enqueue_block(struct iio_dma_buffer_block *block)
+{
+	struct iio_dma_buffer_queue *queue = block->queue;
+
+	/* If in fileio mode buffers can't be enqueued. */
+	if (queue->fileio.enabled)
+		return -EBUSY;
+
+	switch (block->state) {
+	case IIO_BLOCK_STATE_QUEUED:
+		return -EPERM;
+	case IIO_BLOCK_STATE_ACTIVE:
+	case IIO_BLOCK_STATE_DEAD:
+		return -EBUSY;
+	case IIO_BLOCK_STATE_DONE:
+		break;
+	}
+
+	return 0;
+}
+
+int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
+				  struct iio_dma_buffer_block *block,
+				  struct dma_fence *fence,
+				  struct sg_table *sgt,
+				  size_t size, bool cyclic)
+{
+	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
+	bool cookie;
+	int ret;
+
+	WARN_ON(!mutex_is_locked(&queue->lock));
+
+	cookie = dma_fence_begin_signalling();
+
+	ret = iio_dma_can_enqueue_block(block);
+	if (ret < 0)
+		goto out_end_signalling;
+
+	block->bytes_used = size;
+	block->cyclic = cyclic;
+	block->sg_table = sgt;
+	block->fence = fence;
+
+	iio_dma_buffer_enqueue(queue, block);
+
+out_end_signalling:
+	dma_fence_end_signalling(cookie);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_enqueue_dmabuf);
+
+void iio_dma_buffer_lock_queue(struct iio_buffer *buffer)
+{
+	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
+
+	mutex_lock(&queue->lock);
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_lock_queue);
+
+void iio_dma_buffer_unlock_queue(struct iio_buffer *buffer)
+{
+	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
+
+	mutex_unlock(&queue->lock);
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_unlock_queue);
+
 /**
  * iio_dma_buffer_set_bytes_per_datum() - DMA buffer set_bytes_per_datum callback
  * @buffer: Buffer to set the bytes-per-datum for
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 18d3702fa95d..b62ff2a3f554 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -7,6 +7,7 @@
 #ifndef __INDUSTRIALIO_DMA_BUFFER_H__
 #define __INDUSTRIALIO_DMA_BUFFER_H__
 
+#include <linux/atomic.h>
 #include <linux/list.h>
 #include <linux/kref.h>
 #include <linux/spinlock.h>
@@ -16,6 +17,9 @@
 struct iio_dma_buffer_queue;
 struct iio_dma_buffer_ops;
 struct device;
+struct dma_buf_attachment;
+struct dma_fence;
+struct sg_table;
 
 /**
  * enum iio_block_state - State of a struct iio_dma_buffer_block
@@ -41,6 +45,8 @@ enum iio_block_state {
  * @queue: Parent DMA buffer queue
  * @kref: kref used to manage the lifetime of block
  * @state: Current state of the block
+ * @cyclic: True if this is a cyclic buffer
+ * @fileio: True if this buffer is used for fileio mode
  */
 struct iio_dma_buffer_block {
 	/* May only be accessed by the owner of the block */
@@ -63,6 +69,14 @@ struct iio_dma_buffer_block {
 	 * queue->list_lock if the block is not owned by the core.
 	 */
 	enum iio_block_state state;
+
+	bool cyclic;
+	bool fileio;
+
+	struct dma_buf_attachment *attach;
+	struct sg_table *sg_table;
+
+	struct dma_fence *fence;
 };
 
 /**
@@ -72,6 +86,7 @@ struct iio_dma_buffer_block {
  * @pos: Read offset in the active block
  * @block_size: Size of each block
  * @next_dequeue: index of next block that will be dequeued
+ * @enabled: Whether the buffer is operating in fileio mode
  */
 struct iio_dma_buffer_queue_fileio {
 	struct iio_dma_buffer_block *blocks[2];
@@ -80,6 +95,7 @@ struct iio_dma_buffer_queue_fileio {
 	size_t block_size;
 
 	unsigned int next_dequeue;
+	bool enabled;
 };
 
 /**
@@ -95,6 +111,7 @@ struct iio_dma_buffer_queue_fileio {
  *	the DMA controller
  * @incoming: List of buffers on the incoming queue
  * @active: Whether the buffer is currently active
+ * @num_dmabufs: Total number of DMABUFs attached to this queue
 * @fileio: FileIO state
  */
 struct iio_dma_buffer_queue {
@@ -107,6 +124,7 @@ struct iio_dma_buffer_queue {
 	struct list_head incoming;
 
 	bool active;
+	atomic_t num_dmabufs;
 
 	struct iio_dma_buffer_queue_fileio fileio;
 };
@@ -142,4 +160,17 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
 void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue);
 void iio_dma_buffer_release(struct iio_dma_buffer_queue *queue);
 
+struct iio_dma_buffer_block *
+iio_dma_buffer_attach_dmabuf(struct iio_buffer *buffer,
+			     struct dma_buf_attachment *attach);
+void iio_dma_buffer_detach_dmabuf(struct iio_buffer *buffer,
+				  struct iio_dma_buffer_block *block);
+int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
+				  struct iio_dma_buffer_block *block,
+				  struct dma_fence *fence,
+				  struct sg_table *sgt,
+				  size_t size, bool cyclic);
+void iio_dma_buffer_lock_queue(struct iio_buffer *buffer);
+void iio_dma_buffer_unlock_queue(struct iio_buffer *buffer);
+
 #endif
-- 
2.43.0