Date: Mon, 28 Mar 2022 18:24:09 +0100
From: Jonathan Cameron
To: Paul Cercueil
Cc: Michael Hennerich, Lars-Peter Clausen, Christian König,
 Sumit Semwal, Jonathan Corbet, Alexandru Ardelean,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-iio@vger.kernel.org
Subject: Re: [PATCH v2 02/12] iio: buffer-dma: Enable buffer write support
Message-ID: <20220328182409.1e959386@jic23-huawei>
In-Reply-To: <20220207125933.81634-3-paul@crapouillou.net>
References: <20220207125933.81634-1-paul@crapouillou.net>
 <20220207125933.81634-3-paul@crapouillou.net>
On Mon, 7 Feb 2022 12:59:23 +0000
Paul Cercueil wrote:

> Adding write support to the buffer-dma code is easy - the write()
> function basically needs to do the exact same thing as the read()
> function: dequeue a block, read or write the data, enqueue the block
> when entirely processed.
>
> Therefore, iio_dma_buffer_read() and the new iio_dma_buffer_write()
> now both call a function iio_dma_buffer_io(), which performs this
> task.
>
> The .space_available() callback can return the exact same value as the
> .data_available() callback for input buffers, since in both cases we
> count the exact same thing (the number of bytes in each available
> block).
>
> Note that we preemptively reset block->bytes_used to the block's size
> in iio_dma_buffer_request_update(), as in the future the
> iio_dma_buffer_enqueue() function won't reset it.
>
> v2: - Fix block->state not being reset in
>       iio_dma_buffer_request_update() for output buffers.
>     - Only update block->bytes_used once and add a comment about why we
>       update it.
>     - Add a comment about why we're setting a different state for output
>       buffers in iio_dma_buffer_request_update().
>     - Remove useless cast to bool (!!) in iio_dma_buffer_io().
>
> Signed-off-by: Paul Cercueil
> Reviewed-by: Alexandru Ardelean

One comment inline.

I'd be tempted to queue this up with that fixed, but do we have any
users? Even though it's trivial, I'm not that keen on carrying code
upstream well in advance of it being used.

Thanks,

Jonathan
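For context on what "write support" means here: with an output buffer
enabled, userspace drives this path through an ordinary write() on the
IIO character device. A minimal sketch, assuming a hypothetical DAC
exposed as /dev/iio:device0 with 16-bit samples and a buffer already
configured and enabled through sysfs:

	#include <fcntl.h>
	#include <stdint.h>
	#include <unistd.h>

	int main(void)
	{
		int16_t samples[1024] = { 0 };	/* waveform to push out */
		int fd = open("/dev/iio:device0", O_WRONLY);

		if (fd < 0)
			return 1;

		/*
		 * Each write() dequeues a DONE block, copies the samples
		 * into it, and re-queues it for DMA, mirroring the read()
		 * path described in the commit message above.
		 */
		if (write(fd, samples, sizeof(samples)) < 0) {
			close(fd);
			return 1;
		}

		close(fd);
		return 0;
	}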
> ---
>  drivers/iio/buffer/industrialio-buffer-dma.c | 88 ++++++++++++++++----
>  include/linux/iio/buffer-dma.h               |  7 ++
>  2 files changed, 79 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
> index 1fc91467d1aa..a9f1b673374f 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dma.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dma.c
> @@ -195,6 +195,18 @@ static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
>  	block->state = IIO_BLOCK_STATE_DONE;
>  }
>
> +static void iio_dma_buffer_queue_wake(struct iio_dma_buffer_queue *queue)
> +{
> +	__poll_t flags;
> +
> +	if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN)
> +		flags = EPOLLIN | EPOLLRDNORM;
> +	else
> +		flags = EPOLLOUT | EPOLLWRNORM;
> +
> +	wake_up_interruptible_poll(&queue->buffer.pollq, flags);
> +}
> +
>  /**
>   * iio_dma_buffer_block_done() - Indicate that a block has been completed
>   * @block: The completed block
> @@ -212,7 +224,7 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
>  	spin_unlock_irqrestore(&queue->list_lock, flags);
>
>  	iio_buffer_block_put_atomic(block);
> -	wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
> +	iio_dma_buffer_queue_wake(queue);
>  }
>  EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done);
>
> @@ -241,7 +253,7 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
>  	}
>  	spin_unlock_irqrestore(&queue->list_lock, flags);
>
> -	wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
> +	iio_dma_buffer_queue_wake(queue);
>  }
>  EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
>
> @@ -335,8 +347,24 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
>  			queue->fileio.blocks[i] = block;
>  		}
>
> -		block->state = IIO_BLOCK_STATE_QUEUED;
> -		list_add_tail(&block->head, &queue->incoming);
> +		/*
> +		 * block->bytes_used may have been modified previously, e.g. by
> +		 * iio_dma_buffer_block_list_abort(). Reset it here to the
> +		 * block's size so that iio_dma_buffer_io() will work.
> +		 */
> +		block->bytes_used = block->size;
> +
> +		/*
> +		 * If it's an input buffer, mark the block as queued, and
> +		 * iio_dma_buffer_enable() will submit it. Otherwise mark it as
> +		 * done, which means it's ready to be dequeued.
> +		 */
> +		if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) {
> +			block->state = IIO_BLOCK_STATE_QUEUED;
> +			list_add_tail(&block->head, &queue->incoming);
> +		} else {
> +			block->state = IIO_BLOCK_STATE_DONE;
> +		}
>  	}
>
>  out_unlock:
> @@ -465,20 +493,12 @@ static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
>  	return block;
>  }
>
> -/**
> - * iio_dma_buffer_read() - DMA buffer read callback
> - * @buffer: Buffer to read form
> - * @n: Number of bytes to read
> - * @user_buffer: Userspace buffer to copy the data to
> - *
> - * Should be used as the read callback for iio_buffer_access_ops
> - * struct for DMA buffers.
> - */
> -int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
> -	char __user *user_buffer)
> +static int iio_dma_buffer_io(struct iio_buffer *buffer,
> +			     size_t n, char __user *user_buffer, bool is_write)
>  {
>  	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
>  	struct iio_dma_buffer_block *block;
> +	void *addr;
>  	int ret;
>
>  	if (n < buffer->bytes_per_datum)
> @@ -501,8 +521,13 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
>  	n = rounddown(n, buffer->bytes_per_datum);
>  	if (n > block->bytes_used - queue->fileio.pos)
>  		n = block->bytes_used - queue->fileio.pos;
> +	addr = block->vaddr + queue->fileio.pos;
>
> -	if (copy_to_user(user_buffer, block->vaddr + queue->fileio.pos, n)) {
> +	if (is_write)
> +		ret = copy_from_user(addr, user_buffer, n);
> +	else
> +		ret = copy_to_user(user_buffer, addr, n);
> +	if (ret) {
>  		ret = -EFAULT;
>  		goto out_unlock;
>  	}
> @@ -521,8 +546,39 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
>
>  	return ret;
>  }
> +
> +/**
> + * iio_dma_buffer_read() - DMA buffer read callback
> + * @buffer: Buffer to read from
> + * @n: Number of bytes to read
> + * @user_buffer: Userspace buffer to copy the data to
> + *
> + * Should be used as the read callback for iio_buffer_access_ops
> + * struct for DMA buffers.
> + */
> +int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
> +	char __user *user_buffer)
> +{
> +	return iio_dma_buffer_io(buffer, n, user_buffer, false);
> +}
>  EXPORT_SYMBOL_GPL(iio_dma_buffer_read);
>
> +/**
> + * iio_dma_buffer_write() - DMA buffer write callback
> + * @buffer: Buffer to write to
> + * @n: Number of bytes to write
> + * @user_buffer: Userspace buffer to copy the data from
> + *
> + * Should be used as the write callback for iio_buffer_access_ops
> + * struct for DMA buffers.
> + */
> +int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
> +	const char __user *user_buffer)
> +{
> +	return iio_dma_buffer_io(buffer, n, (__force char *)user_buffer, true);

Casting away the const is a little nasty. Perhaps it's worth adding a
parameter to iio_dma_buffer_io() so you can have different parameters
for the read and write cases and hence keep the const in place?

	return iio_dma_buffer_io(buffer, n, NULL, user_buffer, true);

and

	return iio_dma_buffer_io(buffer, n, user_buffer, NULL, false);
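A rough sketch of that variant, purely illustrative (the parameter
names are invented here, and exactly one of the two user pointers is
expected to be non-NULL per call):

	static int iio_dma_buffer_io(struct iio_buffer *buffer, size_t n,
				     char __user *user_buffer,
				     const char __user *user_buffer_in,
				     bool is_write)
	{
		struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
		struct iio_dma_buffer_block *block;
		void *addr;
		int ret;

		/* ... dequeue a block and clamp n, exactly as in the patch ... */

		addr = block->vaddr + queue->fileio.pos;

		if (is_write)
			/* The userspace source keeps its const qualifier. */
			ret = copy_from_user(addr, user_buffer_in, n);
		else
			ret = copy_to_user(user_buffer, addr, n);
		if (ret) {
			ret = -EFAULT;
			goto out_unlock;
		}

		/* ... advance queue->fileio.pos, then the out_unlock path
		 * drops the mutex and returns, as in the patch ... */
	}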
> +}
> +EXPORT_SYMBOL_GPL(iio_dma_buffer_write);
> +
>  /**
>   * iio_dma_buffer_data_available() - DMA buffer data_available callback
>   * @buf: Buffer to check for data availability
> diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
> index 18d3702fa95d..490b93f76fa8 100644
> --- a/include/linux/iio/buffer-dma.h
> +++ b/include/linux/iio/buffer-dma.h
> @@ -132,6 +132,8 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
>  	struct iio_dev *indio_dev);
>  int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
>  	char __user *user_buffer);
> +int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
> +	const char __user *user_buffer);
>  size_t iio_dma_buffer_data_available(struct iio_buffer *buffer);
>  int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
>  int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
> @@ -142,4 +144,9 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
>  void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue);
>  void iio_dma_buffer_release(struct iio_dma_buffer_queue *queue);
>
> +static inline size_t iio_dma_buffer_space_available(struct iio_buffer *buffer)
> +{
> +	return iio_dma_buffer_data_available(buffer);
> +}
> +
>  #endif
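For completeness, a hedged sketch of how a driver might wire the new
callbacks into its iio_buffer_access_ops next to the existing read-side
hooks. The .write and .space_available fields are assumed to be
introduced by the core changes earlier in this series, and the
remaining hooks are elided:

	static const struct iio_buffer_access_ops iio_dmaengine_buffer_ops = {
		.read = iio_dma_buffer_read,
		.write = iio_dma_buffer_write,
		.data_available = iio_dma_buffer_data_available,
		.space_available = iio_dma_buffer_space_available,
		.request_update = iio_dma_buffer_request_update,
		.enable = iio_dma_buffer_enable,
		.disable = iio_dma_buffer_disable,
		/* ... set_bytes_per_datum, set_length, release, etc. ... */
	};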