Date: Mon, 28 Mar 2022 18:17:06 +0100
From: Jonathan Cameron
To: Paul Cercueil
Cc: Michael Hennerich, Lars-Peter Clausen, Christian König, Sumit Semwal,
 Jonathan Corbet, Alexandru Ardelean, dri-devel@lists.freedesktop.org,
 linaro-mm-sig@lists.linaro.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-iio@vger.kernel.org
Subject: Re: [PATCH v2 01/12] iio: buffer-dma: Get rid of outgoing queue
Message-ID: <20220328181706.4148e08b@jic23-huawei>
In-Reply-To: <20220207125933.81634-2-paul@crapouillou.net>
References: <20220207125933.81634-1-paul@crapouillou.net>
 <20220207125933.81634-2-paul@crapouillou.net>

On Mon, 7 Feb 2022 12:59:22 +0000
Paul Cercueil wrote:

> The buffer-dma code was using two queues, incoming and outgoing, to
> manage the state of the blocks in use.
>
> While this totally works, it adds some complexity to the code,
> especially since the code only manages 2 blocks. It is much easier to
> just check each block's state manually, and keep a counter for the next
> block to dequeue.
>
> Since the new DMABUF based API wouldn't use the outgoing queue anyway,
> getting rid of it now makes the upcoming changes simpler.
>
> With this change, the IIO_BLOCK_STATE_DEQUEUED is now useless, and can
> be removed.
>
> v2: - Only remove the outgoing queue, and keep the incoming queue, as we
>       want the buffer to start streaming data as soon as it is enabled.
>     - Remove IIO_BLOCK_STATE_DEQUEUED, since it is now functionally the
>       same as IIO_BLOCK_STATE_DONE.
>
> Signed-off-by: Paul Cercueil

Hi Paul,

In the interests of moving things forward / simplifying what people need
to look at:

This change looks good to me on its own.

Lars had some comments on v1. Lars, could you take a look at this and
verify whether this version addresses the points you raised? (I think it
does, but they were your comments, so you're better placed to judge.)

Thanks,

Jonathan

> ---
>  drivers/iio/buffer/industrialio-buffer-dma.c | 44 ++++++++++----------
>  include/linux/iio/buffer-dma.h               |  7 ++--
>  2 files changed, 26 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
> index d348af8b9705..1fc91467d1aa 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dma.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dma.c
> @@ -179,7 +179,7 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
>  	}
>
>  	block->size = size;
> -	block->state = IIO_BLOCK_STATE_DEQUEUED;
> +	block->state = IIO_BLOCK_STATE_DONE;
>  	block->queue = queue;
>  	INIT_LIST_HEAD(&block->head);
>  	kref_init(&block->kref);
> @@ -191,16 +191,8 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
>
>  static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
>  {
> -	struct iio_dma_buffer_queue *queue = block->queue;
> -
> -	/*
> -	 * The buffer has already been freed by the application, just drop the
> -	 * reference.
> -	 */
> -	if (block->state != IIO_BLOCK_STATE_DEAD) {
> +	if (block->state != IIO_BLOCK_STATE_DEAD)
>  		block->state = IIO_BLOCK_STATE_DONE;
> -		list_add_tail(&block->head, &queue->outgoing);
> -	}
>  }
>
>  /**
> @@ -261,7 +253,6 @@ static bool iio_dma_block_reusable(struct iio_dma_buffer_block *block)
>  	 * not support abort and has not given back the block yet.
>  	 */
>  	switch (block->state) {
> -	case IIO_BLOCK_STATE_DEQUEUED:
>  	case IIO_BLOCK_STATE_QUEUED:
>  	case IIO_BLOCK_STATE_DONE:
>  		return true;
> @@ -317,7 +308,6 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
>  	 * dead. This means we can reset the lists without having to fear
>  	 * corrution.
>  	 */
> -	INIT_LIST_HEAD(&queue->outgoing);
>  	spin_unlock_irq(&queue->list_lock);
>
>  	INIT_LIST_HEAD(&queue->incoming);
> @@ -456,14 +446,20 @@ static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
>  	struct iio_dma_buffer_queue *queue)
>  {
>  	struct iio_dma_buffer_block *block;
> +	unsigned int idx;
>
>  	spin_lock_irq(&queue->list_lock);
> -	block = list_first_entry_or_null(&queue->outgoing, struct
> -			iio_dma_buffer_block, head);
> -	if (block != NULL) {
> -		list_del(&block->head);
> -		block->state = IIO_BLOCK_STATE_DEQUEUED;
> +
> +	idx = queue->fileio.next_dequeue;
> +	block = queue->fileio.blocks[idx];
> +
> +	if (block->state == IIO_BLOCK_STATE_DONE) {
> +		idx = (idx + 1) % ARRAY_SIZE(queue->fileio.blocks);
> +		queue->fileio.next_dequeue = idx;
> +	} else {
> +		block = NULL;
>  	}
> +
>  	spin_unlock_irq(&queue->list_lock);
>
>  	return block;
> @@ -539,6 +535,7 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
>  	struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
>  	struct iio_dma_buffer_block *block;
>  	size_t data_available = 0;
> +	unsigned int i;
>
>  	/*
>  	 * For counting the available bytes we'll use the size of the block not
> @@ -552,8 +549,15 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
>  		data_available += queue->fileio.active_block->size;
>
>  	spin_lock_irq(&queue->list_lock);
> -	list_for_each_entry(block, &queue->outgoing, head)
> -		data_available += block->size;
> +
> +	for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
> +		block = queue->fileio.blocks[i];
> +
> +		if (block != queue->fileio.active_block
> +		    && block->state == IIO_BLOCK_STATE_DONE)
> +			data_available += block->size;
> +	}
> +
>  	spin_unlock_irq(&queue->list_lock);
>  	mutex_unlock(&queue->lock);
>
> @@ -617,7 +621,6 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
>  	queue->ops = ops;
>
>  	INIT_LIST_HEAD(&queue->incoming);
> -	INIT_LIST_HEAD(&queue->outgoing);
>
>  	mutex_init(&queue->lock);
>  	spin_lock_init(&queue->list_lock);
> @@ -645,7 +648,6 @@ void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue)
>  			continue;
>  		queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
>  	}
> -	INIT_LIST_HEAD(&queue->outgoing);
>  	spin_unlock_irq(&queue->list_lock);
>
>  	INIT_LIST_HEAD(&queue->incoming);
> diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
> index 6564bdcdac66..18d3702fa95d 100644
> --- a/include/linux/iio/buffer-dma.h
> +++ b/include/linux/iio/buffer-dma.h
> @@ -19,14 +19,12 @@ struct device;
>
>  /**
>   * enum iio_block_state - State of a struct iio_dma_buffer_block
> - * @IIO_BLOCK_STATE_DEQUEUED: Block is not queued
>   * @IIO_BLOCK_STATE_QUEUED: Block is on the incoming queue
>   * @IIO_BLOCK_STATE_ACTIVE: Block is currently being processed by the DMA
>   * @IIO_BLOCK_STATE_DONE: Block is on the outgoing queue
>   * @IIO_BLOCK_STATE_DEAD: Block has been marked as to be freed
>   */
>  enum iio_block_state {
> -	IIO_BLOCK_STATE_DEQUEUED,
>  	IIO_BLOCK_STATE_QUEUED,
>  	IIO_BLOCK_STATE_ACTIVE,
>  	IIO_BLOCK_STATE_DONE,
> @@ -73,12 +71,15 @@ struct iio_dma_buffer_block {
>   * @active_block: Block being used in read()
>   * @pos: Read offset in the active block
>   * @block_size: Size of each block
> + * @next_dequeue: index of next block that will be dequeued
>   */
>  struct iio_dma_buffer_queue_fileio {
>  	struct iio_dma_buffer_block *blocks[2];
>  	struct iio_dma_buffer_block *active_block;
>  	size_t pos;
>  	size_t block_size;
> +
> +	unsigned int next_dequeue;
>  };
>
>  /**
> @@ -93,7 +94,6 @@ struct iio_dma_buffer_queue_fileio {
>   * list and typically also a list of active blocks in the part that handles
>   * the DMA controller
>   * @incoming: List of buffers on the incoming queue
> - * @outgoing: List of buffers on the outgoing queue
>   * @active: Whether the buffer is currently active
>   * @fileio: FileIO state
>   */
> @@ -105,7 +105,6 @@ struct iio_dma_buffer_queue {
>  	struct mutex lock;
>  	spinlock_t list_lock;
>  	struct list_head incoming;
> -	struct list_head outgoing;
>
>  	bool active;
>
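
For anyone following the thread without the driver source to hand, here is
a minimal, standalone sketch of the scheme the patch switches to. This is
an illustration only, not the driver code: the locking, reference counting,
active_block handling and DMA paths are all omitted, and the type and
function names are simplified stand-ins for the ones in the patch.

#include <stdio.h>
#include <stddef.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

enum block_state {
	BLOCK_QUEUED,	/* on the incoming queue, waiting for the DMA */
	BLOCK_ACTIVE,	/* currently being filled by the DMA */
	BLOCK_DONE,	/* filled, ready to be handed to the reader */
	BLOCK_DEAD,	/* marked for freeing */
};

struct block {
	enum block_state state;
	size_t size;
};

struct fileio_state {
	struct block blocks[2];		/* the queue only manages two blocks */
	unsigned int next_dequeue;	/* index of the next block to hand out */
};

/*
 * The old code moved completed blocks onto an "outgoing" list and popped
 * the head of that list. The new scheme just looks at the block the
 * counter points to: hand it out only if the DMA marked it DONE, and
 * advance the counter modulo the number of blocks so FIFO order is kept.
 */
static struct block *dequeue(struct fileio_state *f)
{
	unsigned int idx = f->next_dequeue;
	struct block *b = &f->blocks[idx];

	if (b->state != BLOCK_DONE)
		return NULL;	/* nothing completed yet */

	f->next_dequeue = (idx + 1) % ARRAY_SIZE(f->blocks);
	return b;
}

/* Available bytes are counted by scanning block states, not a list. */
static size_t data_available(const struct fileio_state *f)
{
	size_t total = 0;
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(f->blocks); i++)
		if (f->blocks[i].state == BLOCK_DONE)
			total += f->blocks[i].size;

	return total;
}

int main(void)
{
	struct fileio_state f = {
		.blocks = {
			{ .state = BLOCK_DONE,   .size = 4096 },
			{ .state = BLOCK_ACTIVE, .size = 4096 },
		},
		.next_dequeue = 0,
	};

	printf("available: %zu bytes\n", data_available(&f));	/* 4096 */
	printf("dequeue: %s\n", dequeue(&f) ? "got a block" : "none");
	printf("dequeue: %s\n", dequeue(&f) ? "got a block" : "none");	/* none: block 1 still ACTIVE */
	return 0;
}

With only two blocks, a full linked list is more machinery than the problem
needs; the counter reproduces the FIFO behaviour of the old outgoing list
provided blocks complete in submission order, which the round-robin fileio
path arranges.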