From: Daniel Vetter
Date: Thu, 29 Jul 2021 16:05:53 +0200
Subject: Re: [RFC 0/4] dma-fence: Deadline awareness
To: Pekka Paalanen
Cc: Simon Ser, Michel Dänzer, Rob Clark, Matthew Brost, Roy Sun,
 Christian König, open list, dri-devel,
 "moderated list:DMA BUFFER SHARING FRAMEWORK", Luben Tuikov,
 Gustavo Padovan, Alex Deucher, Tian Tao, Lee Jones,
 "open list:DMA BUFFER SHARING FRAMEWORK"
In-Reply-To: <20210729155959.665fc1a6@eldfell>
References: <04d44873-d8e6-6ae7-f0f9-17bcb484d697@amd.com> <9d5f4415-d470-3bc1-7d52-61ba739706ae@daenzer.net> <20210729123732.3259a9bf@eldfell> <20210729155959.665fc1a6@eldfell>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jul 29, 2021 at 3:00 PM Pekka Paalanen wrote:
> On Thu, 29 Jul 2021 14:18:29 +0200
> Daniel Vetter wrote:
>
> > On Thu, Jul 29, 2021 at 12:37:32PM +0300, Pekka Paalanen wrote:
> > > On Thu, 29 Jul 2021 11:03:36 +0200
> > > Daniel Vetter wrote:
> > >
> > > > On Thu, Jul 29, 2021 at 10:17:43AM +0200, Michel Dänzer wrote:
> > > > > On 2021-07-29 9:09 a.m., Daniel Vetter wrote:
> > > > > > On Wed, Jul 28, 2021 at 08:34:13AM -0700, Rob Clark wrote:
> > > > > >> On Wed, Jul 28, 2021 at 6:24 AM Michel Dänzer wrote:
> > > > > >>> On 2021-07-28 3:13 p.m., Christian König
wrote:
> > > > > >>>> Am 28.07.21 um 15:08 schrieb Michel Dänzer:
> > > > > >>>>> On 2021-07-28 1:36 p.m., Christian König wrote:
> > > > > >>>>>> Am 27.07.21 um 17:37 schrieb Rob Clark:
> > > > > >>>>>>> On Tue, Jul 27, 2021 at 8:19 AM Michel Dänzer wrote:
> > > > > >>>>>>>> On 2021-07-27 5:12 p.m., Rob Clark wrote:
> > > > > >>>>>>>>> On Tue, Jul 27, 2021 at 7:50 AM Michel Dänzer wrote:
> > > > > >>>>>>>>>> On 2021-07-27 1:38 a.m., Rob Clark wrote:
> > > > > >>>>>>>>>>> From: Rob Clark
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> Based on discussion from a previous series[1] to add a "boost" mechanism
> > > > > >>>>>>>>>>> when, for example, vblank deadlines are missed. Instead of a boost
> > > > > >>>>>>>>>>> callback, this approach adds a way to set a deadline on the fence, by
> > > > > >>>>>>>>>>> which the waiter would like to see the fence signalled.
> > > > > > ...
> > >
> > > > > I'm not questioning that this approach helps when there's a direct
> > > > > chain of fences from the client to the page flip. I'm pointing out
> > > > > there will not always be such a chain.
> > > > >
> > > > > >> But maybe the solution to make this also useful for mutter
> > > > >
> > > > > It's not just mutter BTW. I understand gamescope has been doing
> > > > > this for some time already. And there seems to be consensus among
> > > > > developers of Wayland compositors that this is needed, so I expect
> > > > > at least all the major compositors to do this longer term.
> > > > >
> > > > > >> is to, once we have deadline support, extend it with an ioctl to
> > > > > >> the dma-fence fd so userspace can be the one setting the
> > > > > >> deadline.
> > > > >
> > > > > I was thinking in a similar direction.
> > > > >
> > > > > > atomic ioctl with TEST_ONLY and SET_DEADLINES? Still gives mutter
> > > > > > the option to bail out with an old frame if it's too late?
> > > > >
> > > > > This is a bit cryptic though, can you elaborate?
> > > >
> > > > So essentially when the mutter compositor guesstimator is fairly
> > > > confident about the next frame's composition (recall you're keeping
> > > > track of clients to estimate their usual latency or something like
> > > > that), then it does a TEST_ONLY commit to check it all works and prep
> > > > the rendering, but _not_ yet fire it off.
> > > >
> > > > Instead it waits until all buffers complete, and if some don't, pick
> > > > the previous one. Which I guess in an extreme case would mean you
> > > > need a different window tree configuration and maybe a different
> > > > TEST_ONLY check and all that, not sure how you solve that.
> > > >
> > > > Anyway, in that TEST_ONLY commit my idea is that you'd also supply
> > > > all the in-fences you expect to depend upon (maybe we need an
> > > > additional list of in-fences for your rendering job), plus a deadline
> > > > when you want to have them done (so that there's enough time for your
> > > > render job still). And the kernel then calls dma_fence_set_deadline
> > > > on all of them.
> > > >
> > > > Pondering this more, maybe a separate ioctl is simpler where you just
> > > > supply a list of in-fences and deadlines.
> > > >
> > > > The real reason I want to tie this to atomic is for privilege
> > > > checking reasons. I don't think normal userspace should have the
> > > > power to set arbitrary deadlines like this - at least on i915 it will
> > > > also give you a slight priority boost and stuff like that, to make
> > > > sure your rendering for the current frame goes in ahead of the next
> > > > frame's prep work.
> > > >
> > > > So maybe just a new ioctl that does this which is limited to the
> > > > current kms owner (aka drm_master)?
> > > Yeah.
> > >
> > > Why not have a Wayland compositor *always* "set the deadlines" for the
> > > next screen update as soon as it gets the wl_surface.commit with the
> > > new buffer and fences (a simplified description of what is actually
> > > necessary to take a new window state set into use)?
> >
> > Yeah that's probably best. And if the frame is scheduled (video at 24fps
> > or whatever) you can also immediately set the deadline for that too, just
> > a few frames later. Always minus compositor budget taken into account.
> >
> > > The Wayland client posted the frame to the compositor, so surely it
> > > wants it ready and displayed ASAP. If we happen to have a Wayland frame
> > > queuing extension, then also take that into account when setting the
> > > deadline.
> > >
> > > Then, *independently* of that, the compositor will choose which frames
> > > it will actually use in its composition when the time comes.
> > >
> > > No need for any KMS atomic commit fiddling, userspace just explicitly
> > > sets the deadline on the fence and that's it. You could tie the
> > > privilege of setting deadlines to simply holding DRM master on whatever
> > > device? So the ioctl would need both the fence and any DRM device fd.
> >
> > Yeah tying that up with atomic doesn't make sense.
> >
> > > A rogue application opening a DRM device and becoming DRM master on it
> > > just to be able to abuse deadlines feels both unlikely and with
> > > insignificant consequences. It stops the obvious abuse, and if someone
> > > actually goes the extra effort, then so what.
> >
> > With logind you can't become drm master just for lolz anymore, so I'm not
> > worried about that. On such systems only logind has the rights to access
> > the primary node, everyone doing headless goes through the render node.
>
> Mm, I hope the DRM leasing protocols don't rely on clients being able
> to open KMS nodes anymore... they used to at some point, I think, for
> the initial resource discovery before actually leasing anything.

Yeah I thought that was fixed with additional xrandr/wayland discovery
protocols. It doesn't work anymore on systems with display/render split.
I think that was just to get it all going.

> "only logind has rights" might be a bit off still.
>
> > So just limiting the deadline ioctl to current kms owner is imo perfectly
> > good enough for a security model.
>
> There could be multiple DRM devices. Including VKMS. Some of them not
> used. The deadline setting ioctl can't guarantee the fenced buffer is
> going to be used on the same DRM device the ioctl was called with. Or
> used at all with KMS.

That's not a problem, the fence deadline interface is cross-driver.

> Anyway, even if that is not completely secure, I wouldn't think that
> setting deadlines can do more than change GPU job priorities and power
> consumption, which seem quite benign. It's enough hoops to jump through
> that I think it stops everything we care to stop.

Yeah. Plus with this patch set you can do this already, just need to send
out an atomic flip with all the fences merged together into your in-fence
slots.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch