From: Daniel Stone
To: Greg Hackmann
Cc: Ville Syrjälä, Gustavo Padovan, Daniel Stone, Riley Andrews, dri-devel, Linux Kernel Mailing List, Arve Hjønnevåg, John Harrison
Date: Wed, 27 Apr 2016 07:57:00 +0100
Subject: Re: [RFC v2 5/8] drm/fence: add in-fences support

Hi,

On 26 April 2016 at 21:48, Greg Hackmann wrote:
> On 04/26/2016 01:05 PM, Daniel Vetter wrote:
>> On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
>>> What are they doing that can't stuff the fences into an array
>>> instead of props?
>>
>> The hw composer interface is one in-fence per plane. That's really
>> the major reason why the kernel interface is built to match. And I
>> really don't think we should diverge just because we have a slight
>> different color preference ;-)
>
> The relationship between layers and fences is only fuzzy and indirect
> though. The relationship is really between the buffer you're
> displaying on that layer, and the fence representing the work done to
> render into that buffer. SurfaceFlinger just happens to bundle them
> together inside the same struct hwc_layer_1 as an API convenience.

Right, and when using implicit fencing, this comes as a plane property,
by virtue of plane -> fb -> buffer -> fence.

> Which is kind of splitting hairs as long as you have a 1-to-1
> relationship between layers and DRM planes. But that's not always the
> case.

Can you please elaborate?

> A (per-CRTC?) array of fences would be more flexible. And even in the
> cases where you could make a 1-to-1 mapping between planes and
> fences, it's not that much more work for userspace to assemble those
> fences into an array anyway.

As Ville says, I don't want to go down the path of scheduling CRTC
updates separately, because that breaks MST pretty badly. If you don't
want your updates to display atomically, then don't schedule them
atomically ... ? That's the only reason I can see for making fencing
per-CRTC, rather than just a pile of unassociated fences appended to
the request. Per-CRTC fences also force userspace to merge fences
before submission when using multiple planes per CRTC, which is pretty
punitive.

I think having the fence semantically attached to the plane is a little
bit nicer for tracing (why was this request delayed? -> a fence ->
which buffer was that fence for?) at a glance.
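To make 'punitive' concrete: with one fence per CRTC, userspace has to
collapse every per-plane fence into a single fence before each commit.
A rough sketch of that cost, assuming the sync_file uAPI as it later
landed in mainline (<linux/sync_file.h>; the staging ABI current at the
time differs slightly), with a hypothetical merge_fences() helper and
error handling elided:

/* Collapse two per-plane fences into one per-CRTC fence, which a
 * one-fence-per-CRTC interface would force userspace to do per frame. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

static int merge_fences(int fd1, int fd2)
{
        struct sync_merge_data data;

        memset(&data, 0, sizeof(data));
        strcpy(data.name, "crtc-fence");
        data.fd2 = fd2;

        /* The kernel allocates a new fence fd which signals once both
         * inputs have signalled, and returns it in data.fence. */
        if (ioctl(fd1, SYNC_IOC_MERGE, &data) < 0)
                return -1;

        return data.fence;
}

That's an extra ioctl plus a fresh fd per merge, per frame, and the
merged fence also throws away exactly the per-buffer association that
makes tracing useful.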
Also, the 'pile of appended fences' model is a bit awkward for more
generic userspace, which creates a libdrm request and builds it
incrementally (add a plane, try it out, wind back). Using properties
makes that really easy, but without properties we'd have to add
separate codepaths - and thus separate ABI, which complicates
distribution - to libdrm, to account for a separate per-plane fence
array which shares a cursor with the properties. So for that reason if
no other, I'd really prefer not to go down that route.

Cheers,
Daniel
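PS: a rough sketch of the incremental build-and-test flow when the
in-fence is just another plane property. Placeholder object/property
IDs throughout (real ones come from drmModeObjectGetProperties()), and
I'm using the property name IN_FENCE_FD that eventually reached
mainline; try_add_plane() is a hypothetical helper, not libdrm API:

#include <stdint.h>
#include <stddef.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Add a plane to an in-progress atomic request, in-fence included,
 * test the result, and wind back on failure. */
static int try_add_plane(int fd, drmModeAtomicReq *req,
                         uint32_t plane_id, uint32_t fb_prop,
                         uint32_t in_fence_prop,
                         uint32_t fb_id, int fence_fd)
{
        int cursor = drmModeAtomicGetCursor(req);

        drmModeAtomicAddProperty(req, plane_id, fb_prop, fb_id);
        /* The in-fence rides along as just another property. */
        drmModeAtomicAddProperty(req, plane_id, in_fence_prop, fence_fd);

        if (drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY,
                                NULL)) {
                /* Didn't fit: wind the cursor back, fence and all,
                 * with no separate array to keep in sync. */
                drmModeAtomicSetCursor(req, cursor);
                return -1;
        }

        return 0;
}

With a separate fence array, both the add and the wind-back would need
parallel bookkeeping - exactly the extra libdrm codepaths and ABI
complained about above.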