Date: Wed, 5 Apr 2017 23:43:06 +0300
From: Sakari Ailus
To: Gustavo Padovan
Cc: Gustavo Padovan, linux-media@vger.kernel.org, Hans Verkuil,
	Mauro Carvalho Chehab, Laurent Pinchart,
	Javier Martinez Canillas, linux-kernel@vger.kernel.org
Subject: Re: [RFC 00/10] V4L2 explicit synchronization support
Message-ID: <20170405204306.GB3265@valkosipuli.retiisi.org.uk>
References: <20170313192035.29859-1-gustavo@padovan.org>
	<20170404113449.GC3288@valkosipuli.retiisi.org.uk>
	<20170405152457.GD32294@joana>
In-Reply-To: <20170405152457.GD32294@joana>

Hi Gustavo,

On Wed, Apr 05, 2017 at 05:24:57PM +0200, Gustavo Padovan wrote:
> Hi Sakari,
> 
> 2017-04-04 Sakari Ailus:
> 
> > Hi Gustavo,
> > 
> > Thank you for the patchset. Please see my comments below.
> > 
> > On Mon, Mar 13, 2017 at 04:20:25PM -0300, Gustavo Padovan wrote:
> > > From: Gustavo Padovan
> > > 
> > > Hi,
> > > 
> > > This RFC adds support for Explicit Synchronization of shared
> > > buffers in V4L2. It uses the Sync File Framework[1] as the
> > > mechanism to communicate fences between the kernel and userspace.
> > > 
> > > I'm sending this to start the discussion on the best approach to
> > > implement Explicit Synchronization; please check the TODO/OPEN
> > > section below.
> > > 
> > > Explicit Synchronization allows us to control the synchronization
> > > of shared buffers from userspace by passing fences to the kernel
> > > and/or receiving them from the kernel.
> > > 
> > > Fences passed to the kernel are named in-fences, and the kernel
> > > should wait for them to signal before using the buffer. On the
> > > other side, the kernel creates out-fences for every buffer it
> > > receives from userspace. This fence is sent back to userspace and
> > > it will signal when the capture, for example, has finished.
> > > 
> > > Signalling an out-fence in V4L2 would mean that the job on the
> > > buffer is done and the buffer can be used by other drivers.
> > 
> > Shouldn't you be able to add two fences to the buffer, one in and
> > one out? I.e. you'd have the buffer passed from another device to a
> > V4L2 device and on to a third device.
> > 
> > (Or, two fences per plane, as you elaborated below.)
> 
> The out one should be created by V4L2 in this case, sent to userspace
> and then sent on to the third device. Another option is what we've
> been calling future fences in DRM, where we may have a syscall to
> create this out-fence for us; then we could pass both the in-fence
> and the out-fence to the device. But that can be supported later,
> along with what this RFC proposes.

Please excuse my ignorance on fences. I just wanted to make sure that
case was also considered.

struct v4l2_buffer will run out of space soon, so we'll need a
replacement anyway. The timecode field is still available for
re-use...
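By the way, independently of how the fd ends up being passed through
QBUF, waiting for an out-fence needs no new uAPI on the userspace
side: a sync_file fd polls as readable once its fence has signalled.
A minimal sketch, assuming only plain sync_file semantics and nothing
V4L2-specific:

#include <errno.h>
#include <poll.h>

/*
 * Block until the fence backing a sync_file fd signals. A sync_file
 * fd becomes readable once its fence has signalled, so a plain
 * poll() is enough; no dedicated wait ioctl is needed.
 */
static int wait_fence(int fence_fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };
	int ret;

	do {
		ret = poll(&pfd, 1, timeout_ms);
	} while (ret < 0 && errno == EINTR);

	return ret == 1 ? 0 : -1;	/* 0: signalled, -1: timeout/error */
}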
> > > Current RFC implementation
> > > --------------------------
> > > 
> > > The current implementation is not intended to be more than a PoC
> > > to start the discussion on how Explicit Synchronization should be
> > > supported in V4L2.
> > > 
> > > The first patch proposes a userspace API for fences; then in
> > > patch 2 we prepare for the addition of in-fences in patch 3 by
> > > introducing the infrastructure in vb2 to wait for an in-fence to
> > > signal before queueing the buffer in the driver.
> > > 
> > > Patch 4 fixes uvc v4l2 event handling, and patch 5 configures
> > > q->dev for the vivid driver so that events can be subscribed to
> > > and dequeued on it.
> > > 
> > > Patches 6-7 add support for notifying BUF_QUEUED events, i.e.,
> > > letting userspace know that a particular buffer was enqueued in
> > > the driver. This is needed because we return the out-fence fd as
> > > an out argument in QBUF, but at the time QBUF returns we don't
> > > know which buffer the fence will be attached to; thus the
> > > BUF_QUEUED event tells userspace which buffer is associated with
> > > the fence received in QBUF.
> > > 
> > > Patches 8 and 9 add more fence infrastructure to support
> > > out-fences, and finally patch 10 adds support for out-fences.
> > > 
> > > TODO/OPEN:
> > > ----------
> > > 
> > > * For this first implementation we will keep the ordering of the
> > > buffers queued in videobuf2; that means we will only enqueue a
> > > buffer whose fence has signalled if that buffer is the first one
> > > in the queue. Otherwise it has to wait until it is the first one.
> > > This is not implemented yet. Later we could create a flag to
> > > allow unordered queueing in the drivers from vb2 if needed.
> > > 
> > > * Should we have out-fences per-buffer or per-plane? Or both? In
> > > this RFC they are per-buffer for simplicity, but Mauro and Javier
> > > raised the option of per-plane fences. That could benefit mem2mem
> > > and V4L2 <-> GPU operation, at least in cases where we have
> > > capture hardware that releases the Y plane before the other
> > > planes, for example. When using V4L2 per-plane out-fences to
> > > communicate with KMS, they would need to be merged together, as
> > > currently the DRM Plane interface only supports one fence per
> > > DRM Plane.
> > > 
> > > In-fences should be per-buffer, as DRM only has per-buffer
> > > fences, but in the case of mem2mem operations per-plane fences
> > > might be useful?
> > > 
> > > So should we have both ways, per-plane and per-buffer, or just
> > > one of them for now?
> > 
> > The data_offset field is only present in struct v4l2_plane, i.e.
> > it is only available through the multi-planar API even if you just
> > have a single plane.
> 
> I didn't get why you mentioned the data_offset field. :)

I think I meant to continue this but didn't end up writing it
down. :-) What I wanted to say was that the multi-plane API is a
super-set of the single-plane API, and there's already a case of not
all the functionality being available through the single-plane API.
At least I'm ok with adding the per-plane fences to the multi-plane
API only.

The framework could possibly do more to support the single-plane API
as an application interface, so that applications using only the
single-plane API would get that as a bonus. (Just thinking out loud.
Definitely out of scope for this patchset.)
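One more note on the merging question above: userspace can already
merge two fence fds with the sync_file SYNC_IOC_MERGE ioctl before
handing the result to KMS, so per-plane out-fences wouldn't
necessarily need new kernel support for that part. A rough sketch;
the fence name below is just an arbitrary label:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

/*
 * Merge two fence fds (e.g. per-plane out-fences) into a single
 * sync_file that signals only once both inputs have signalled.
 * Returns the new fd, or -1 on error. The input fds remain valid
 * and still belong to the caller.
 */
static int merge_fences(int fd1, int fd2)
{
	struct sync_merge_data data;

	memset(&data, 0, sizeof(data));
	strcpy(data.name, "v4l2-plane-fences");	/* arbitrary label */
	data.fd2 = fd2;

	if (ioctl(fd1, SYNC_IOC_MERGE, &data) < 0)
		return -1;

	return data.fence;	/* fd of the merged fence */
}

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk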