From: Alexandre Courbot
Date: Fri, 26 Oct 2018 16:38:04 +0900
Subject: Re: [RFC] Stateless codecs: how to refer to reference frames
To: Hans Verkuil
Cc: Tomasz Figa, Paul Kocialkowski, Mauro Carvalho Chehab, Pawel Osciak,
    Linux Media Mailing List, LKML

Hi Hans,

On Wed, Oct 24, 2018 at 6:52 PM Hans Verkuil wrote:
>
> Hi Alexandre,
>
> On 10/24/2018 10:16 AM, Alexandre Courbot wrote:
> > Hi Hans,
> >
> > On Fri, Oct 19, 2018 at 6:40 PM Hans Verkuil wrote:
> >>
> >> From Alexandre's '[RFC PATCH v3] media: docs-rst: Document m2m stateless
> >> video decoder interface':
> >>
> >> On 10/19/18 10:09, Alexandre Courbot wrote:
> >>> Two points being currently discussed have not been changed in this
> >>> revision due to lack of better idea. Of course this is open to change:
> >>
> >>
> >>> * The other hot topic is the use of capture buffer indexes in order to
> >>>   reference frames. I understand the concerns, but it doesn't seem like
> >>>   we have come up with a better proposal so far - and since capture buffers
> >>>   are essentially, well, frames, using their buffer index to directly
> >>>   reference them doesn't sound too inappropriate to me. There is also
> >>>   the restriction that drivers must return capture buffers in queue
> >>>   order. Do we have any concrete example where this scenario would not
> >>>   work?
> >>
> >> I'll stick to decoders in describing the issue. Stateless encoders probably
> >> do not have this issue.
> >>
> >> To recap: the application provides a buffer with compressed data to the
> >> decoder. After the request is finished the application can dequeue the
> >> decompressed frame from the capture queue.
> >>
> >> In order to decompress, the decoder needs to access previously decoded
> >> reference frames. The request passed to the decoder contained state
> >> information giving the buffer index (or indices) of the capture buffers
> >> that contain the reference frame(s).
> >>
> >> This approach puts restrictions on the framework and the application:
> >>
> >> 1) It assumes that the application can predict the capture indices.
> >>    This works as long as there is a simple relationship between the
> >>    buffer passed to the decoder and the buffer you get back.
> >>
> >>    But that may not be true for future codecs. And what if one buffer
> >>    produces multiple capture buffers? (E.g. if you want to get back
> >>    decompressed slices instead of full frames to reduce output latency.)
> >>
> >>    This API should be designed to be future-proof (within reason of course),
> >>    and I am not at all convinced that future codecs will be just as easy
> >>    to predict.
> >>
> >> 2) It assumes that neither drivers nor applications mess with the buffers.
> >>    One case that might happen today is if the DMA fails and a buffer is
> >>    returned marked ERROR and the DMA is retried with the next buffer. There
> >>    is nothing in the spec that prevents you from doing that, but it will mess
> >>    up the capture index numbering. And does the application always know in
> >>    what order capture buffers are queued? Perhaps there are two threads: one
> >>    queueing buffers with compressed data, and the other dequeueing the
> >>    decompressed buffers, and they are running mostly independently.
> >>
> >> I believe that assuming that you can always predict the indices of the
> >> capture queue is dangerous and asking for problems in the future.
> >>
> >> I am very much in favor of using a dedicated cookie. The application sets
> >> it for the compressed buffer and the driver copies it to the uncompressed
> >> capture buffer. It keeps track of the association between capture index
> >> and cookie. If a compressed buffer decompresses into multiple capture
> >> buffers, then they will all be associated with the same cookie, so
> >> that simplifies how you refer to reference frames if they are split
> >> over multiple buffers.
> >>
> >> The codec controls refer to reference frames by cookie(s).
> >
> > So as discussed yesterday, I understand your issue with using buffer
> > indexes. The cookie idea sounds like it could work, but I'm afraid you
> > could still run into issues when you don't have buffer symmetry.
> >
> > For instance, imagine that the compressed buffer contains 2 frames
> > worth of data. In this case, the 2 dequeued capture buffers would
> > carry the same cookie, making it impossible to reference either frame
> > unambiguously.
>
> But this is a stateless codec, so each compressed buffer contains only
> one frame. That's the responsibility of the bitstream parser to ensure
> that.

Just as we are making the design future-proof by considering the case
where we get one buffer per slice, shouldn't we think about the
(currently hypothetical) case of a future codec specification in which
slices contain information that is relevant for several consecutive
frames? It may be a worthless design, as classic reference frames are
probably enough to carry redundant information, but I wanted to point
out the scenario just in case.

> The whole idea of the stateless codec is that you supply only one frame
> at a time to the codec.
>
> If someone indeed puts multiple frames into a single buffer, then
> the behavior is likely undefined. Does anyone have any idea what
> would happen with the cedrus driver in that case? This is actually
> a good test.
>
> Anyway, I would consider this an application bug. Garbage in, garbage out.

Yeah, at least for the existing codecs this should be a bug.

> >
> > There may also be a similar, yet simpler solution already in place
> > that we can use. The v4l2_buffer structure contains a "sequence"
> > member that is supposed to sequentially count the delivered frames.
>
> The sequence field suffers from exactly the same problems as the
> buffer index: it doesn't work if one compressed frame results in
> multiple capture buffers (one for each slice), since the sequence
> number will be increased for each capture buffer. Also if capture
> buffers are marked as error for some reason, the sequence number is
> also incremented for that buffer, again making it impossible to
> predict in userspace what the sequence counter will be.
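For the record, this is roughly the bookkeeping user-space has to do
today to follow the capture queue by sequence number. This is just a
sketch of my own, not code from the RFC, and it assumes a single-planar
capture queue, exactly one capture buffer per decoded frame, and no
buffers flagged with V4L2_BUF_FLAG_ERROR:

/*
 * Sketch only: predict the "sequence" value of the next dequeued
 * capture buffer.  Any skipped, dropped or error-flagged buffer makes
 * expected_sequence silently drift away from reality.
 */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static __u32 expected_sequence;

static int dqbuf_decoded_frame(int video_fd)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;

	if (ioctl(video_fd, VIDIOC_DQBUF, &buf) < 0)
		return -1;

	if (buf.sequence != expected_sequence)
		fprintf(stderr, "lost track: expected %u, got %u\n",
			expected_sequence, buf.sequence);
	expected_sequence++;

	/* This index is what the codec controls would use as a reference. */
	return buf.index;
}
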
Well, if we get one capture buffer per slice, user-space can count them
just as well as in the one-buffer-per-frame scenario. That being said, I
agree that requiring user-space to keep track of that could be tricky:
lose track once, and all your future reference frames will use an
incorrect buffer.

So cookies it is, I guess! I will include them in the next version of
the RFC; a rough sketch of the flow I have in mind is appended below.

Cheers,
Alex.
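---

To illustrate what I mean by cookies above, here is a sketch of the
OUTPUT-side flow. Everything in it is made up for illustration: struct
v4l2_buffer has no "cookie" field today, and hypothetical_decode_params
is only a placeholder for whatever the codec controls in the next RFC
will actually look like.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Placeholder for the per-frame decode parameters set on the request. */
struct hypothetical_decode_params {
	uint64_t forward_ref_cookie;	/* cookie of the forward reference  */
	uint64_t backward_ref_cookie;	/* cookie of the backward reference */
};

/*
 * Queue one compressed frame, tagged with an application-chosen cookie.
 * The driver would copy that cookie to whatever capture buffer(s) this
 * frame decodes into, so that later frames can name it as a reference
 * through their decode parameters instead of guessing a capture index.
 */
static int queue_compressed_frame(int video_fd, unsigned int index,
				  uint64_t cookie)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = index;
	/* buf.cookie = cookie;  <-- hypothetical field, does not exist yet */
	(void)cookie;

	return ioctl(video_fd, VIDIOC_QBUF, &buf);
}

Since user-space picks the cookie values itself, it no longer has to
predict anything about the capture queue; the driver just carries the
cookie from the compressed OUTPUT buffer over to the decoded CAPTURE
buffer(s) it produces.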