From: Benjamin Gaignard <benjamin.gaignard@collabora.com>
To: mchehab@kernel.org, tfiga@chromium.org, m.szyprowski@samsung.com,
	ming.qian@nxp.com, ezequiel@vanguardiasur.com.ar, p.zabel@pengutronix.de,
	gregkh@linuxfoundation.org, hverkuil-cisco@xs4all.nl,
	nicolas.dufresne@collabora.com
Cc: linux-media@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org, linux-rockchip@lists.infradead.org,
	linux-staging@lists.linux.dev, kernel@collabora.com,
	Benjamin Gaignard <benjamin.gaignard@collabora.com>
Subject: [PATCH v13 08/56] media: videobuf2: Use vb2_get_num_buffers() helper
Date: Thu, 19 Oct 2023 14:51:34 +0200
Message-Id: <20231019125222.21370-9-benjamin.gaignard@collabora.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231019125222.21370-1-benjamin.gaignard@collabora.com>
References: <20231019125222.21370-1-benjamin.gaignard@collabora.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Stop using the queue's num_buffers field directly and use
vb2_get_num_buffers() instead. This prepares for the future
'delete buffers' feature, where there may be holes in the buffer
indices.

Signed-off-by: Benjamin Gaignard <benjamin.gaignard@collabora.com>
---
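A note on the helper itself (an assumption added for context, not something
introduced by this patch): vb2_get_num_buffers() is taken here to be a
minimal wrapper along the lines of the sketch below; the authoritative
definition lives in include/media/videobuf2-core.h and may differ.

/*
 * Minimal sketch, assuming the helper currently just wraps q->num_buffers.
 * Routing all readers through it means its implementation can change once
 * the 'delete buffers' feature introduces holes in the buffer indices.
 */
static inline unsigned int vb2_get_num_buffers(struct vb2_queue *q)
{
	return q->num_buffers;
}

Call sites that iterate over the queue pair it with vb2_get_buffer() and
skip NULL entries, as several hunks below already do.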
 .../media/common/videobuf2/videobuf2-core.c   | 92 +++++++++++--------
 .../media/common/videobuf2/videobuf2-v4l2.c   |  4 +-
 2 files changed, 54 insertions(+), 42 deletions(-)

diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
index b406a30a9b35..c5c5ae4d213d 100644
--- a/drivers/media/common/videobuf2/videobuf2-core.c
+++ b/drivers/media/common/videobuf2/videobuf2-core.c
@@ -444,13 +444,14 @@ static int __vb2_queue_alloc(struct vb2_queue *q, enum vb2_memory memory,
 			     unsigned int num_buffers, unsigned int num_planes,
 			     const unsigned plane_sizes[VB2_MAX_PLANES])
 {
+	unsigned int q_num_buffers = vb2_get_num_buffers(q);
 	unsigned int buffer, plane;
 	struct vb2_buffer *vb;
 	int ret;
 
 	/* Ensure that q->num_buffers+num_buffers is below VB2_MAX_FRAME */
 	num_buffers = min_t(unsigned int, num_buffers,
-			    VB2_MAX_FRAME - q->num_buffers);
+			    VB2_MAX_FRAME - q_num_buffers);
 
 	for (buffer = 0; buffer < num_buffers; ++buffer) {
 		/* Allocate vb2 buffer structures */
@@ -470,7 +471,7 @@ static int __vb2_queue_alloc(struct vb2_queue *q, enum vb2_memory memory,
 			vb->planes[plane].min_length = plane_sizes[plane];
 		}
 
-		vb2_queue_add_buffer(q, vb, q->num_buffers + buffer);
+		vb2_queue_add_buffer(q, vb, q_num_buffers + buffer);
 		call_void_bufop(q, init_buffer, vb);
 
 		/* Allocate video buffer memory for the MMAP type */
@@ -514,8 +515,9 @@ static void __vb2_free_mem(struct vb2_queue *q, unsigned int buffers)
 {
 	unsigned int buffer;
 	struct vb2_buffer *vb;
+	unsigned int q_num_buffers = vb2_get_num_buffers(q);
 
-	for (buffer = q->num_buffers - buffers; buffer < q->num_buffers;
+	for (buffer = q_num_buffers - buffers; buffer < q_num_buffers;
 	     ++buffer) {
 		vb = vb2_get_buffer(q, buffer);
 		if (!vb)
@@ -539,11 +541,12 @@ static void __vb2_free_mem(struct vb2_queue *q, unsigned int buffers)
 static void __vb2_queue_free(struct vb2_queue *q, unsigned int buffers)
 {
 	unsigned int buffer;
+	unsigned int q_num_buffers = vb2_get_num_buffers(q);
 
 	lockdep_assert_held(&q->mmap_lock);
 
 	/* Call driver-provided cleanup function for each buffer, if provided */
-	for (buffer = q->num_buffers - buffers; buffer < q->num_buffers;
+	for (buffer = q_num_buffers - buffers; buffer < q_num_buffers;
 	     ++buffer) {
 		struct vb2_buffer *vb = vb2_get_buffer(q, buffer);
 
@@ -559,7 +562,7 @@ static void __vb2_queue_free(struct vb2_queue *q, unsigned int buffers)
 	 * Check that all the calls were balanced during the life-time of this
 	 * queue. If not then dump the counters to the kernel log.
 	 */
-	if (q->num_buffers) {
+	if (q_num_buffers) {
 		bool unbalanced = q->cnt_start_streaming != q->cnt_stop_streaming ||
 				  q->cnt_prepare_streaming != q->cnt_unprepare_streaming ||
 				  q->cnt_wait_prepare != q->cnt_wait_finish;
@@ -585,7 +588,7 @@ static void __vb2_queue_free(struct vb2_queue *q, unsigned int buffers)
 		q->cnt_stop_streaming = 0;
 		q->cnt_unprepare_streaming = 0;
 	}
-	for (buffer = 0; buffer < q->num_buffers; ++buffer) {
+	for (buffer = 0; buffer < vb2_get_num_buffers(q); buffer++) {
 		struct vb2_buffer *vb = vb2_get_buffer(q, buffer);
 		bool unbalanced;
 
@@ -637,7 +640,7 @@ static void __vb2_queue_free(struct vb2_queue *q, unsigned int buffers)
 #endif
 
 	/* Free vb2 buffers */
-	for (buffer = q->num_buffers - buffers; buffer < q->num_buffers;
+	for (buffer = q_num_buffers - buffers; buffer < q_num_buffers;
 	     ++buffer) {
 		struct vb2_buffer *vb = vb2_get_buffer(q, buffer);
 
@@ -649,7 +652,7 @@ static void __vb2_queue_free(struct vb2_queue *q, unsigned int buffers)
 	}
 
 	q->num_buffers -= buffers;
-	if (!q->num_buffers) {
+	if (!vb2_get_num_buffers(q)) {
 		q->memory = VB2_MEMORY_UNKNOWN;
 		INIT_LIST_HEAD(&q->queued_list);
 	}
@@ -680,7 +683,7 @@ EXPORT_SYMBOL(vb2_buffer_in_use);
 static bool __buffers_in_use(struct vb2_queue *q)
 {
 	unsigned int buffer;
-	for (buffer = 0; buffer < q->num_buffers; ++buffer) {
+	for (buffer = 0; buffer < vb2_get_num_buffers(q); ++buffer) {
 		struct vb2_buffer *vb = vb2_get_buffer(q, buffer);
 
 		if (!vb)
@@ -806,6 +809,7 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
 		     unsigned int flags, unsigned int *count)
 {
 	unsigned int num_buffers, allocated_buffers, num_planes = 0;
+	unsigned int q_num_bufs = vb2_get_num_buffers(q);
 	unsigned plane_sizes[VB2_MAX_PLANES] = { };
 	bool non_coherent_mem = flags & V4L2_MEMORY_FLAG_NON_COHERENT;
 	unsigned int i;
@@ -821,7 +825,7 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
 		return -EBUSY;
 	}
 
-	if (*count == 0 || q->num_buffers != 0 ||
+	if (*count == 0 || q_num_bufs != 0 ||
 	    (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory) ||
 	    !verify_coherency_flags(q, non_coherent_mem)) {
 		/*
@@ -839,7 +843,7 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
 		 * queued without ever calling STREAMON.
 		 */
 		__vb2_queue_cancel(q);
-		__vb2_queue_free(q, q->num_buffers);
+		__vb2_queue_free(q, q_num_bufs);
 		mutex_unlock(&q->mmap_lock);
 
 		/*
@@ -934,7 +938,7 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
 	if (ret < 0) {
 		/*
 		 * Note: __vb2_queue_free() will subtract 'allocated_buffers'
-		 * from q->num_buffers and it will reset q->memory to
+		 * from already queued buffers and it will reset q->memory to
 		 * VB2_MEMORY_UNKNOWN.
 		 */
 		__vb2_queue_free(q, allocated_buffers);
@@ -968,10 +972,11 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
 	unsigned int num_planes = 0, num_buffers, allocated_buffers;
 	unsigned plane_sizes[VB2_MAX_PLANES] = { };
 	bool non_coherent_mem = flags & V4L2_MEMORY_FLAG_NON_COHERENT;
-	bool no_previous_buffers = !q->num_buffers;
-	int ret;
+	unsigned int q_num_bufs = vb2_get_num_buffers(q);
+	bool no_previous_buffers = !q_num_bufs;
+	int ret = 0;
 
-	if (q->num_buffers == VB2_MAX_FRAME) {
+	if (q_num_bufs == VB2_MAX_FRAME) {
 		dprintk(q, 1, "maximum number of buffers already allocated\n");
 		return -ENOBUFS;
 	}
@@ -1000,7 +1005,7 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
 			return -EINVAL;
 	}
 
-	num_buffers = min(*count, VB2_MAX_FRAME - q->num_buffers);
+	num_buffers = min(*count, VB2_MAX_FRAME - q_num_bufs);
 
 	if (requested_planes && requested_sizes) {
 		num_planes = requested_planes;
@@ -1032,7 +1037,7 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
 		num_buffers = allocated_buffers;
 
 		/*
-		 * q->num_buffers contains the total number of buffers, that the
+		 * num_buffers contains the total number of buffers, that the
 		 * queue driver has set up
 		 */
 		ret = call_qop(q, queue_setup, q, &num_buffers,
@@ -1053,7 +1058,7 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
 	if (ret < 0) {
 		/*
 		 * Note: __vb2_queue_free() will subtract 'allocated_buffers'
-		 * from q->num_buffers and it will reset q->memory to
+		 * from already queued buffers and it will reset q->memory to
 		 * VB2_MEMORY_UNKNOWN.
 		 */
 		__vb2_queue_free(q, allocated_buffers);
@@ -1670,7 +1675,7 @@ static int vb2_start_streaming(struct vb2_queue *q)
 		 * Forcefully reclaim buffers if the driver did not
 		 * correctly return them to vb2.
 		 */
-		for (i = 0; i < q->num_buffers; ++i) {
+		for (i = 0; i < vb2_get_num_buffers(q); ++i) {
 			vb = vb2_get_buffer(q, i);
 
 			if (!vb)
@@ -2076,7 +2081,7 @@ static void __vb2_queue_cancel(struct vb2_queue *q)
 	 * to vb2 in stop_streaming().
 	 */
 	if (WARN_ON(atomic_read(&q->owned_by_drv_count))) {
-		for (i = 0; i < q->num_buffers; ++i) {
+		for (i = 0; i < vb2_get_num_buffers(q); i++) {
 			struct vb2_buffer *vb = vb2_get_buffer(q, i);
 
 			if (!vb)
@@ -2120,7 +2125,7 @@ static void __vb2_queue_cancel(struct vb2_queue *q)
 	 * call to __fill_user_buffer() after buf_finish(). That order can't
 	 * be changed, so we can't move the buf_finish() to __vb2_dqbuf().
 	 */
-	for (i = 0; i < q->num_buffers; ++i) {
+	for (i = 0; i < vb2_get_num_buffers(q); i++) {
 		struct vb2_buffer *vb;
 		struct media_request *req;
 
@@ -2168,6 +2173,7 @@ static void __vb2_queue_cancel(struct vb2_queue *q)
 
 int vb2_core_streamon(struct vb2_queue *q, unsigned int type)
 {
+	unsigned int q_num_bufs = vb2_get_num_buffers(q);
 	int ret;
 
 	if (type != q->type) {
@@ -2180,12 +2186,12 @@ int vb2_core_streamon(struct vb2_queue *q, unsigned int type)
 		return 0;
 	}
 
-	if (!q->num_buffers) {
+	if (!q_num_bufs) {
 		dprintk(q, 1, "no buffers have been allocated\n");
 		return -EINVAL;
 	}
 
-	if (q->num_buffers < q->min_buffers_needed) {
+	if (q_num_bufs < q->min_buffers_needed) {
 		dprintk(q, 1, "need at least %u allocated buffers\n",
 			q->min_buffers_needed);
 		return -EINVAL;
@@ -2513,7 +2519,8 @@ void vb2_core_queue_release(struct vb2_queue *q)
 	__vb2_cleanup_fileio(q);
 	__vb2_queue_cancel(q);
 	mutex_lock(&q->mmap_lock);
-	__vb2_queue_free(q, q->num_buffers);
+	__vb2_queue_free(q, vb2_get_num_buffers(q));
+	q->num_buffers = 0;
 	mutex_unlock(&q->mmap_lock);
 }
 EXPORT_SYMBOL_GPL(vb2_core_queue_release);
@@ -2542,7 +2549,7 @@ __poll_t vb2_core_poll(struct vb2_queue *q, struct file *file,
 	/*
 	 * Start file I/O emulator only if streaming API has not been used yet.
 	 */
-	if (q->num_buffers == 0 && !vb2_fileio_is_active(q)) {
+	if (vb2_get_num_buffers(q) == 0 && !vb2_fileio_is_active(q)) {
 		if (!q->is_output && (q->io_modes & VB2_READ) &&
 		    (req_events & (EPOLLIN | EPOLLRDNORM))) {
 			if (__vb2_init_fileio(q, 1))
@@ -2580,7 +2587,7 @@ __poll_t vb2_core_poll(struct vb2_queue *q, struct file *file,
 	 * For output streams you can call write() as long as there are fewer
 	 * buffers queued than there are buffers available.
 	 */
-	if (q->is_output && q->fileio && q->queued_count < q->num_buffers)
+	if (q->is_output && q->fileio && q->queued_count < vb2_get_num_buffers(q))
 		return EPOLLOUT | EPOLLWRNORM;
 
 	if (list_empty(&q->done_list)) {
@@ -2629,8 +2636,8 @@ struct vb2_fileio_buf {
  * struct vb2_fileio_data - queue context used by file io emulator
  *
  * @cur_index:	the index of the buffer currently being read from or
- *		written to. If equal to q->num_buffers then a new buffer
- *		must be dequeued.
+ *		written to. If equal to number of already queued buffers
+ *		then a new buffer must be dequeued.
  * @initial_index:	in the read() case all buffers are queued up immediately
  *			in __vb2_init_fileio() and __vb2_perform_fileio() just cycles
  *			buffers. However, in the write() case no buffers are initially
@@ -2640,7 +2647,7 @@ struct vb2_fileio_buf {
  *			buffers. This means that initially __vb2_perform_fileio()
  *			needs to know what buffer index to use when it is queuing up
  *			the buffers for the first time. That initial index is stored
- *			in this field. Once it is equal to q->num_buffers all
+ *			in this field. Once it is equal to num_buffers all
 *			available buffers have been queued and __vb2_perform_fileio()
 *			should start the normal dequeue/queue cycle.
 *
@@ -2690,7 +2697,7 @@ static int __vb2_init_fileio(struct vb2_queue *q, int read)
 	/*
 	 * Check if streaming api has not been already activated.
 	 */
-	if (q->streaming || q->num_buffers > 0)
+	if (q->streaming || vb2_get_num_buffers(q) > 0)
 		return -EBUSY;
 
 	/*
@@ -2740,7 +2747,7 @@ static int __vb2_init_fileio(struct vb2_queue *q, int read)
 	/*
 	 * Get kernel address of each buffer.
 	 */
-	for (i = 0; i < q->num_buffers; i++) {
+	for (i = 0; i < vb2_get_num_buffers(q); i++) {
 		/* vb can never be NULL when using fileio. */
 		vb = vb2_get_buffer(q, i);
 
@@ -2759,18 +2766,23 @@ static int __vb2_init_fileio(struct vb2_queue *q, int read)
 		/*
 		 * Queue all buffers.
 		 */
-		for (i = 0; i < q->num_buffers; i++) {
-			ret = vb2_core_qbuf(q, q->bufs[i], NULL, NULL);
+		for (i = 0; i < vb2_get_num_buffers(q); i++) {
+			struct vb2_buffer *vb2 = vb2_get_buffer(q, i);
+
+			if (!vb2)
+				continue;
+
+			ret = vb2_core_qbuf(q, vb2, NULL, NULL);
 			if (ret)
 				goto err_reqbufs;
 			fileio->bufs[i].queued = 1;
 		}
 		/*
 		 * All buffers have been queued, so mark that by setting
-		 * initial_index to q->num_buffers
+		 * initial_index to num_buffers
 		 */
-		fileio->initial_index = q->num_buffers;
-		fileio->cur_index = q->num_buffers;
+		fileio->initial_index = vb2_get_num_buffers(q);
+		fileio->cur_index = fileio->initial_index;
 	}
 
 	/*
@@ -2964,12 +2976,12 @@ static size_t __vb2_perform_fileio(struct vb2_queue *q, char __user *data, size_
 		 * If we are queuing up buffers for the first time, then
 		 * increase initial_index by one.
 		 */
-		if (fileio->initial_index < q->num_buffers)
+		if (fileio->initial_index < vb2_get_num_buffers(q))
 			fileio->initial_index++;
 		/*
 		 * The next buffer to use is either a buffer that's going to be
-		 * queued for the first time (initial_index < q->num_buffers)
-		 * or it is equal to q->num_buffers, meaning that the next
+		 * queued for the first time (initial_index < num_buffers)
+		 * or it is equal to num_buffers, meaning that the next
 		 * time we need to dequeue a buffer since we've now queued up
 		 * all the 'first time' buffers.
 		 */
@@ -3016,7 +3028,7 @@ static int vb2_thread(void *data)
 	int ret = 0;
 
 	if (q->is_output) {
-		prequeue = q->num_buffers;
+		prequeue = vb2_get_num_buffers(q);
 		copy_timestamp = q->copy_timestamp;
 	}
 
diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
index c6ebc8d2c537..7d798fb15c0b 100644
--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
+++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
@@ -627,7 +627,7 @@ struct vb2_buffer *vb2_find_buffer(struct vb2_queue *q, u64 timestamp)
 	 * This loop doesn't scale if there is a really large number of buffers.
 	 * Maybe something more efficient will be needed in this case.
 	 */
-	for (i = 0; i < q->num_buffers; i++) {
+	for (i = 0; i < vb2_get_num_buffers(q); i++) {
 		vb2 = vb2_get_buffer(q, i);
 
 		if (!vb2)
@@ -761,7 +761,7 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
 
 	fill_buf_caps(q, &create->capabilities);
 	validate_memory_flags(q, create->memory, &create->flags);
-	create->index = q->num_buffers;
+	create->index = vb2_get_num_buffers(q);
 	if (create->count == 0)
 		return ret != -EBUSY ? ret : 0;
 
-- 
2.39.2