Subject: Re: [PATCH v4] media: docs-rst: Document m2m stateless video decoder interface
From: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
To: Nicolas Dufresne, Alexandre Courbot
Cc: Tomasz Figa, Maxime Ripard, Hans Verkuil, Dafna Hirschfeld, Mauro Carvalho Chehab, Linux Media Mailing List, LKML
Date: Wed, 17 Apr 2019 19:18:40 +0200
Hi,

Le mercredi 17 avril 2019 à 12:06 -0400, Nicolas Dufresne a écrit :
> Le mardi 16 avril 2019 à 16:22 +0900, Alexandre Courbot a écrit :
> > On Tue, Apr 16, 2019 at 12:30 AM Nicolas Dufresne wrote:
> > > Le lundi 15 avril 2019 à 15:26 +0200, Paul Kocialkowski a écrit :

[...]

> > Thanks for this great discussion. Let me try to summarize the status
> > of this thread + the IRC discussion and add my own thoughts:
> >
> > Proper support for multiple decoding units (e.g. H.264 slices) per
> > frame should not be an afterthought; compliance with encoded formats
> > depends on it, and the benefit of lower latency is a significant
> > consideration for vendors.
> >
> > m2m, which we use for all stateless codecs, has a strong assumption
> > that one OUTPUT buffer consumed results in one CAPTURE buffer being
> > produced. This assumption can however be overruled: at least the
> > venus driver does it to implement the stateful specification.
>
> The m2m framework code, which is quite minimal, has this limitation,
> but it has nothing to do with the userspace M2M interface. In
> userspace, M2M is just two asynchronous queues. New input data is
> queued on the OUTPUT queue, and results are taken from the CAPTURE
> queue. There is nothing in the API or the spec that limits how much
> input data (OUTPUT queue) will be used to produce a number of results
> (CAPTURE queue).
>
> > So we need a way to specify frame boundaries when submitting encoded
> > content to the driver. One request should contain a single OUTPUT
> > buffer, containing a single decoding unit, but we need a way to
> > specify whether the driver should directly produce a CAPTURE buffer
> > from this request, or keep using the same CAPTURE buffer with
> > subsequent requests.
>
> Yes, that's a good recap, we need a way. Just a clarification: we need
> a way for formats similar to H264/H265, for which the frame boundary
> is often only discovered by parsing the following NAL or signalled
> through a container.
>
> > I can think of 2 ways this can be expressed:
> > 1) We keep the current m2m behavior as the default (a CAPTURE buffer
> > is produced), and add a flag to ask the driver to change that
> > behavior, hold on to the CAPTURE buffer and reuse it with the next
> > request(s);
> > 2) We specify that no CAPTURE buffer is produced by default, unless
> > a flag asking so is specified.
>
> I don't think 1) is really a valid option. A buffer has one state. In
> the current implementation of Cedrus, when one unit (one slice) is
> decoded, the capture buffer is marked as DONE. That signals any
> userspace polling for a capture buffer being ready to DQ. Now, if you
> drive the OUTPUT and CAPTURE queues from separate threads, you end up
> with a race where userspace thinks the buffer is ready, but a new
> slice comes in, so the state has been cleared between the poll
> returning and the call to DQBUF. Userspace will unexpectedly end up
> doing a blocking DQBUF, which is likely unwanted. And if we leave the
> buffer in the DONE state, it's much worse, since there is no way to
> signal that the buffer is ready (that decoding of the unit has
> completed).
>
> As this API does not exist yet, introducing 2) is possible and is
> much saner to handle from userspace. The benefit is that you have no
> special case: the driver just holds off marking the buffer DONE until
> it has processed all units up to the one that carries a
> frame-completion flag.
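Just to make sure we're picturing the same thing, here is roughly
what 2) could look like from the userspace side. This is only a
sketch: the V4L2_BUF_FLAG_END_OF_FRAME name and value are made up for
the example, everything else is the existing request API:

#include <stdbool.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>
#include <linux/videodev2.h>

/* Made-up flag: the driver would only mark the CAPTURE buffer DONE
 * once the slice carrying this flag has been decoded. */
#define V4L2_BUF_FLAG_END_OF_FRAME	0x10000000

static int queue_slice(int video_fd, int media_fd, unsigned int index,
		       struct v4l2_plane *plane, bool last_slice)
{
	struct v4l2_buffer buf;
	int req_fd;

	if (ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd) < 0)
		return -1;

	/* The per-slice controls (SPS/PPS/slice parameters) would be
	 * set here with VIDIOC_S_EXT_CTRLS, using
	 * which = V4L2_CTRL_WHICH_REQUEST_VAL and request_fd = req_fd. */

	/* The plane (bytesused, mem_offset) is assumed to have been
	 * filled in by the caller. */
	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = index;
	buf.m.planes = plane;
	buf.length = 1;
	buf.flags = V4L2_BUF_FLAG_REQUEST_FD;
	buf.request_fd = req_fd;
	if (last_slice)
		buf.flags |= V4L2_BUF_FLAG_END_OF_FRAME;

	if (ioctl(video_fd, VIDIOC_QBUF, &buf) < 0)
		return -1;

	return ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE);
}

With that, the CAPTURE queue only ever signals complete frames, which
avoids the poll/DQBUF race you describe.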
Mhh, without explicitly marking the requests as part of the same
group, we would basically lose the ability to function statelessly
with this idea. By that, I mean that you could no longer schedule a
request that's not part of the current batch of slices, since the
kernel side would either refrain from marking the capture buffer done
for that request, or would mark the capture buffer for the slices
along with the completion of that request.

One way or another, I think we need to explicitly mark the requests
for each slice of the frame as being part of the same group. And
while at it, we might want to make that possible for any use case
that might require it, not just video decoding.

The request API is quite complex on its own, and I guess we're
dealing with complex topics that involve significantly changing the
behavior of V4L2. Not to say that we shouldn't try to keep things
simple, we definitely should, but it might be one of the cases where
the long-term solution is not going to be that easy to carry out.

The way I see it, the media userspace interface was enriched with the
request API to deal with a specific case (associating source data
with its meta-data) and managed to bring a generic and flexible
solution that also covers other use cases. So I feel like solving
this issue would be best done at the media level, and in a way that's
not too centered on our specific use case.

> > The flag could be specified in one of two ways:
> > a) As a new v4l2_buffer flag for the OUTPUT buffer;
> > b) As a dedicated control, either format-specific or common to all
> > codecs.
> >
> > I tend to favor 2) and b) for this, for the reason that with H.264
> > at least, user-space does not know whether a slice is the last
> > slice of a frame until it starts parsing the next one, and we don't
> > know when we will receive it. If we use a control to ask that a
> > CAPTURE buffer be produced, we can always submit another request
> > with only that control set once it is clear that the frame is
> > complete (and not delay decoding meanwhile). In practice I am not
> > that familiar with latency-sensitive streaming; maybe a smart
> > streamer would just append an AUD NAL unit at the end of every
> > frame, so we could submit the flag with the last slice without
> > further delay?
>
> AUD NALs, when present, are the first NAL of a frame, so latency-wise
> they are useless. So what we do is rely on the encoder to tell us:
> encoders will set a flag to signal the last slice of a frame. If you
> are doing RTP, this flag is converted into a marker bit (which is
> RTP-specific). The marker bit is then received on the other side and
> passed to the decoder. The decoder will process the slice and, when
> that is done, will immediately deliver the resulting frame (if
> reordering allows). If the marker is not present, the decoder will
> wait for the next slice in order to determine whether the decoded
> frame can be delivered or not. So without the marker, we effectively
> have one extra frame of latency in the worst case.
>
> What I like about the b) proposal is that we can invert the logic and
> effectively abstract this completely for formats that don't have
> slices (or an equivalent), while having it implemented generically.
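For the b) variant, the "submit another request with only that
control set" case that Alexandre mentioned would look something like
this (the control name is invented, and whether a request without any
OUTPUT buffer should be allowed at all is an open question):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>
#include <linux/videodev2.h>

/* Invented control id, for illustration only. */
#define V4L2_CID_HYP_COMPLETE_FRAME	(V4L2_CID_USER_BASE + 0x1000)

static int complete_frame(int video_fd, int media_fd)
{
	struct v4l2_ext_control ctrl = {
		.id = V4L2_CID_HYP_COMPLETE_FRAME,
		.value = 1,
	};
	struct v4l2_ext_controls ctrls = {
		.which = V4L2_CTRL_WHICH_REQUEST_VAL,
		.count = 1,
		.controls = &ctrl,
	};
	int req_fd;

	if (ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd) < 0)
		return -1;

	ctrls.request_fd = req_fd;
	if (ioctl(video_fd, VIDIOC_S_EXT_CTRLS, &ctrls) < 0)
		return -1;

	return ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE);
}

The inverted logic you mention would then amount to the control
defaulting to "frame complete" for formats without slices, so that
simple decoders never have to touch it.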
> What I had in mind was a), because I was thinking that we could reuse
> the flag for stateful encoders/decoders in order to support the RTP
> marker bit use case and slice-level streaming. Right now, we only do
> full-frame streaming, but that's limiting. The ZynqMP firmware that
> Michael is integrating does support low latency with slice
> processing, so to match the vendor driver's capabilities we'll need
> that flag anyway.
>
> But in the stateless case, it's easier, because not setting it at
> all simply introduces more latency, while for accelerators we would
> like to make the closing of a frame mandatory. So I'm totally fine
> with a different mechanism. Again, this is handled in VAAPI and other
> similar APIs by having begin/end functions for frames, with a number
> of decode_slice() calls in between. So these APIs have an extra
> context for frames on top of slices.
>
> > An extra constraint to enforce would be that each decoding unit
> > belonging to the same frame must be submitted with the same
> > timestamp, otherwise the request submission would fail. We really
> > need a framework to enforce all this at a higher level than
> > individual drivers; once we reach an agreement, I will start
> > working on this.
>
> I agree with that. And adding checks for this would be really
> welcome, to catch errors.

Agreed, this feels like a sane thing to do.

> > Formats that do not support multiple decoding units per frame
> > would reject any request that does not carry the end-of-frame
> > information.
>
> Again, we *could* also reverse the logic, so that by default all
> OUTPUT buffers would be considered complete frames. So far I only
> know of 3 formats that have this feature: H264, H265 and AV1. I'm
> not sure about VP9, I would need to look. But clearly JPEG, VP8,
> H263, raw formats and more don't seem to have this. We could also
> have a generic control/flag and make it mandatory for specific
> formats if that is simpler.

Well, now I am pretty much convinced that what's in the OUTPUT buffer
should be one entity of the decoding unit, which could be a slice or
the coded data for one frame depending on the format. I think adding
a control or a flag for that would bring in too much boilerplate and
too many different cases to handle in the driver, which does not feel
that good.

I'm still open to introducing specific formats for submitting enough
slice data for a whole frame, but I am not sure it is worth the
effort.

Cheers,

Paul
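P.S.: To make the same-timestamp rule above a bit more concrete, here
is a sketch of what userspace could do, with the timestamp used as an
opaque per-frame cookie (the helper names are mine, and the exact
matching semantics are part of what we would need to specify):

#include <stdint.h>
#include <linux/videodev2.h>

/* All slices of one frame carry the same timestamp; the driver copies
 * it to the CAPTURE buffer, which is how the decoded frame is matched
 * back to its slices on dequeue. */
static void tag_slice_with_frame(struct v4l2_buffer *buf, uint64_t cookie)
{
	buf->timestamp.tv_sec = cookie / 1000000;
	buf->timestamp.tv_usec = cookie % 1000000;
}

static uint64_t frame_cookie(const struct v4l2_buffer *buf)
{
	return (uint64_t)buf->timestamp.tv_sec * 1000000 +
	       buf->timestamp.tv_usec;
}

A framework-level check would then reject a request whose timestamp
differs from that of the currently open frame while that frame is not
yet completed.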