Date: Tue, 11 May 2021 19:50:42 +0200
From: Daniel Vetter
To: Rob Clark
Cc: dri-devel, Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, open list, Daniel Vetter
Subject: Re: [PATCH 1/2] drm: Fix dirtyfb stalls
References: <20210508195641.397198-1-robdclark@gmail.com> <20210508195641.397198-2-robdclark@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 11, 2021 at 10:42:58AM -0700, Rob Clark wrote:
> On Tue, May 11, 2021 at 10:21 AM Daniel Vetter wrote:
> >
> > On Tue, May 11, 2021 at 10:19:57AM -0700, Rob Clark wrote:
> > > On Tue, May 11, 2021 at 9:44 AM Daniel Vetter wrote:
> > > >
> > > > On Mon, May 10, 2021 at 12:06:05PM -0700, Rob Clark wrote:
> > > > > On Mon, May 10, 2021 at 10:44 AM Daniel Vetter wrote:
> > > > > >
> > > > > > On Mon, May 10, 2021 at 6:51 PM Rob Clark wrote:
> > > > > > >
> > > > > > > On Mon, May 10, 2021 at 9:14 AM
> > > > > > > Daniel Vetter wrote:
> > > > > > > >
> > > > > > > > On Sat, May 08, 2021 at 12:56:38PM -0700, Rob Clark wrote:
> > > > > > > > > From: Rob Clark
> > > > > > > > >
> > > > > > > > > drm_atomic_helper_dirtyfb() will end up stalling for vblank on "video
> > > > > > > > > mode" type displays, which is pointless and unnecessary. Add an
> > > > > > > > > optional helper vfunc to determine if a plane is attached to a CRTC
> > > > > > > > > that actually needs dirtyfb, and skip over them.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Rob Clark
> > > > > > > >
> > > > > > > > So this is a bit annoying because the idea of all these "remap legacy uapi
> > > > > > > > to atomic constructs" helpers is that they shouldn't need/use anything
> > > > > > > > beyond what userspace also has available. So adding hacks for them feels
> > > > > > > > really bad.
> > > > > > >
> > > > > > > I suppose the root problem is that userspace doesn't know if dirtyfb
> > > > > > > (or similar) is actually required or is a no-op.
> > > > > > >
> > > > > > > But it is perhaps less of a problem because this essentially boils
> > > > > > > down to "x11 vs wayland", and it seems like wayland compositors for
> > > > > > > non-vsync'd rendering just pageflip and throw away extra frames from
> > > > > > > the app?
> > > > > >
> > > > > > Yeah, it's about not adequately batching up rendering and syncing with
> > > > > > hw. Bare metal x11 is just especially stupid about it :-)
> > > > > >
> > > > > > > > Also I feel like it's not entirely the right thing to do here either.
> > > > > > > > We've had this problem already on the fbcon emulation side (which also
> > > > > > > > shouldn't be able to peek behind the atomic kms uapi curtain), and the fix
> > > > > > > > there was to have a worker which batches up all the updates and avoids any
> > > > > > > > stalls in bad places.
> > > > > > >
> > > > > > > I'm not too worried about fbcon not being able to render faster than
> > > > > > > vblank.
> > > > > > > OTOH it is a pretty big problem for x11
> > > > > >
> > > > > > That's why we'd let the worker get ahead at most one dirtyfb. We do
> > > > > > the same with fbcon, which trivially can get ahead of vblank otherwise
> > > > > > (it sometimes flushes each character, so you have to pile them up into
> > > > > > a single update if one is still pending).
> > > > > >
> > > > > > > > Since this is for frontbuffer rendering userspace only, we can probably get
> > > > > > > > away with assuming there's only a single fb, so the implementation becomes
> > > > > > > > pretty simple:
> > > > > > > >
> > > > > > > > - 1 worker, and we keep track of a single pending fb
> > > > > > > > - if there's already a dirty fb pending on a different fb, we stall for
> > > > > > > >   the worker to start processing that one already (i.e. the fb we track is
> > > > > > > >   reset to NULL)
> > > > > > > > - if it's pending on the same fb we just toss away all the updates and go
> > > > > > > >   with a full update, since merging the clip rects is too much work :-) I
> > > > > > > >   think there are helpers so you could be slightly more clever and just have
> > > > > > > >   an overall bounding box
> > > > > > >
> > > > > > > This doesn't really fix the problem, you still end up delaying sending
> > > > > > > the next back-buffer to mesa
> > > > > >
> > > > > > With this the dirtyfb would never block. Also glorious frontbuffer
> > > > > > tracking corruption is possible, but that's not the kernel's problem.
> > > > > > So how would anything get held up in userspace?
> > > > >
> > > > > The part about stalling if a dirtyfb is pending was what I was worried
> > > > > about.. but I suppose you meant the worker stalling, rather than
> > > > > userspace stalling (where I had interpreted it the other way around).
> > > > > As soon as userspace needs to stall, you're losing again.
> > > >
> > > > Nah, I did mean userspace stalling, so we can't pile up unlimited amounts
> > > > of dirtyfb requests in the kernel.
> > > >
> > > > But also I never expect userspace that uses dirtyfb to actually hit this
> > > > stall point (otherwise we'd need to look at this again). It would really
> > > > be only there as defense against abuse.
> > >
> > > I don't believe the modesetting ddx throttles dirtyfb, it (indirectly)
> > > calls this from its BlockHandler.. so if you do end up blocking after
> > > the Nth dirtyfb, you are still going to end up stalling for vblank,
> > > you are just deferring that for a frame or two..
> >
> > Nope, that's not what I mean.
> >
> > By default we pile up the updates, so you _never_ stall. The worker then
> > takes the entire update every time it runs and batches them up.
> >
> > We _only_ stall when we get a dirtyfb with a different fb. Because that's
> > much harder to pile up, plus frontbuffer rendering userspace uses a single
> > fb across all screens anyway.
> >
> > So really I don't expect X to ever stall in its BlockHandler with this.
>
> ok, sorry, I missed the "different fb" part..
>
> but I could see a userspace that uses multiple fbs wanting to do
> front-buffer rendering.. although they are probably only going to do
> it on a single display at a time, so maybe that is a bit of an edge
> case

Yeah, at that point we either tell them "pls don't" (if it's new
userspace), or we quietly sigh and make the stall avoidance/pile-up
logic a bit more fancy to take another case into account.

> > > The thing is, for a push style panel, you don't necessarily have to
> > > wait for "vblank" (because "vblank" isn't necessarily a real thing),
> > > so in that scenario dirtyfb could in theory be fast. What you want to
> > > do is fundamentally different for push vs pull style displays.
> >
> > Yeah, but we'd only stall if userspace does a modeset (which means a
> > different fb), and at that point you'll stall a bit anyway. So it shouldn't
> > hurt.
> >
> > Well, you can do frontbuffer rendering even with the atomic ioctl. Just don't
> > use dirtyfb.
> >
> > But also you really shouldn't use frontbuffer rendering right now, since
> > we don't have the interfaces to tell userspace whether it's
> > cmd-mode or something else and what kind of corruption (if any) to expect
> > when they do that.
>
> Compressed formats and front-buffer rendering don't really work out in
> a pleasant way.. minigbm has a usage flag to indicate that the surface
> will be used for front-buffer rendering (and it is a thing we should
> probably port to real gbm). I think this aspect of it is better
> solved in userspace.

Yeah, I'm thinking more of cmd/scanout panels and stuff like that.
Although even with cmd-mode we currently reserve the right to rescan the
buffer whenever we feel like it in the kernel, so right now you can't
rely on anything to avoid corruption for frontbuffer rendering.

> > > > > > > But we could re-work drm_framebuffer_funcs::dirty to operate on a
> > > > > > > per-crtc basis and hoist the loop and the check for whether dirtyfb
> > > > > > > is needed out of drm_atomic_helper_dirtyfb()
> > > > > >
> > > > > > That's still using information that userspace doesn't have, which is a
> > > > > > bit irky. We might as well go with your thing here then.
> > > > >
> > > > > Arguably, this is something we should expose to userspace.. for DSI
> > > > > command-mode panels, you probably want to make a different decision
> > > > > with regard to how many buffers are in your flip-chain..
> > > > >
> > > > > Possibly we should add/remove the fb_damage_clips property depending
> > > > > on the display type (ie. video/pull vs cmd/push mode)?
> > > >
> > > > I'm not sure whether atomic actually needs this exposed:
> > > > - clients will do full flips for every frame anyway; I've not heard of
> > > >   anyone seriously doing frontbuffer rendering.
> > >
> > > Frontbuffer rendering is actually a thing, for ex. to reduce latency
> > > for stylus input (Android and CrOS do this.. fortunately AFAICT CrOS never
> > > uses the dirtyfb ioctl..
> > > but as soon as someone has the nice idea to
> > > add that we'd be running into the same problem)
> > >
> > > Possibly one idea is to treat dirty-clip updates similarly to cursor
> > > updates, and let the driver accumulate the updates and then wait until
> > > vblank to apply them
> >
> > Yeah, that's what I mean. Except implemented more cheaply. The fbcon code
> > already does it. I think we're seriously talking past each other.
>
> Hmm, well 'state->async_update = true' is a pretty cheap implementation..

It's also very broken thus far :-/ It's broken enough that I've essentially
given up trying to make cursors work reasonably well across drivers, much
less extend this to plane updates in general, or more.

One can dream still, but for legacy ioctls or functionality like fbcon it's
much easier to hack over the problem with some kernel threads before you
call drm_atomic_commit.

Cheers, Daniel

> BR,
> -R
>
> > -Daniel
> >
> > > BR,
> > > -R
> > >
> > > > - transporting the cliprects around and then tossing them if the driver
> > > >   doesn't need them in their flip is probably not a measurable win
> > > >
> > > > But yeah, if I'm wrong and we have a need here and it's useful, then
> > > > exposing this to userspace should be done. Meanwhile I think an "offload to
> > > > worker like fbcon" trick for this legacy interface is probably the best
> > > > option. Plus it will fix things not just for the case where you don't need
> > > > dirty uploading, it will also fix things for the case where you _do_ need
> > > > dirty uploading (since right now we stall in a few bad places for that, I
> > > > think).
> > > > -Daniel
> > > > >
> > > > > BR,
> > > > > -R
> > > > >
> > > > > > -Daniel
> > > > > >
> > > > > > > BR,
> > > > > > > -R
> > > > > > >
> > > > > > > >
> > > > > > > > Could probably steal most of the implementation.
> > > > > > > >
> > > > > > > > This approach here feels a tad too much in the hacky area ...
> > > > > > > >
> > > > > > > > Thoughts?
> > > > > > > > -Daniel
> > > > > > > >
> > > > > > > > > ---
> > > > > > > > >  drivers/gpu/drm/drm_damage_helper.c      |  8 ++++++++
> > > > > > > > >  include/drm/drm_modeset_helper_vtables.h | 14 ++++++++++++++
> > > > > > > > >  2 files changed, 22 insertions(+)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/gpu/drm/drm_damage_helper.c b/drivers/gpu/drm/drm_damage_helper.c
> > > > > > > > > index 3a4126dc2520..a0bed1a2c2dc 100644
> > > > > > > > > --- a/drivers/gpu/drm/drm_damage_helper.c
> > > > > > > > > +++ b/drivers/gpu/drm/drm_damage_helper.c
> > > > > > > > > @@ -211,6 +211,7 @@ int drm_atomic_helper_dirtyfb(struct drm_framebuffer *fb,
> > > > > > > > >  retry:
> > > > > > > > >  	drm_for_each_plane(plane, fb->dev) {
> > > > > > > > >  		struct drm_plane_state *plane_state;
> > > > > > > > > +		struct drm_crtc *crtc;
> > > > > > > > >
> > > > > > > > >  		ret = drm_modeset_lock(&plane->mutex, state->acquire_ctx);
> > > > > > > > >  		if (ret)
> > > > > > > > > @@ -221,6 +222,13 @@ int drm_atomic_helper_dirtyfb(struct drm_framebuffer *fb,
> > > > > > > > >  			continue;
> > > > > > > > >  		}
> > > > > > > > >
> > > > > > > > > +		crtc = plane->state->crtc;
> > > > > > > > > +		if (crtc->helper_private->needs_dirtyfb &&
> > > > > > > > > +		    !crtc->helper_private->needs_dirtyfb(crtc)) {
> > > > > > > > > +			drm_modeset_unlock(&plane->mutex);
> > > > > > > > > +			continue;
> > > > > > > > > +		}
> > > > > > > > > +
> > > > > > > > >  		plane_state = drm_atomic_get_plane_state(state, plane);
> > > > > > > > >  		if (IS_ERR(plane_state)) {
> > > > > > > > >  			ret = PTR_ERR(plane_state);
> > > > > > > > > diff --git a/include/drm/drm_modeset_helper_vtables.h b/include/drm/drm_modeset_helper_vtables.h
> > > > > > > > > index eb706342861d..afa8ec5754e7 100644
> > > > > > > > > --- a/include/drm/drm_modeset_helper_vtables.h
> > > > > > > > > +++ b/include/drm/drm_modeset_helper_vtables.h
> > > > > > > > > @@ -487,6 +487,20 @@ struct drm_crtc_helper_funcs {
> > > > > > > > >  				  bool in_vblank_irq,
> > > > > > > > >  				  int *vpos, int *hpos,
> > > > > > > > >  				  ktime_t *stime, ktime_t *etime,
> > > > > > > > >  				  const struct drm_display_mode *mode);
> > > > > > > > > +
> > > > > > > > > +	/**
> > > > > > > > > +	 * @needs_dirtyfb
> > > > > > > > > +	 *
> > > > > > > > > +	 * Optional callback used by damage helpers to determine if
> > > > > > > > > +	 * fb_damage_clips update is needed.
> > > > > > > > > +	 *
> > > > > > > > > +	 * Returns:
> > > > > > > > > +	 *
> > > > > > > > > +	 * True if fb_damage_clips update is needed to handle DIRTYFB, False
> > > > > > > > > +	 * otherwise. If this callback is not implemented, then True is
> > > > > > > > > +	 * assumed.
> > > > > > > > > +	 */
> > > > > > > > > +	bool (*needs_dirtyfb)(struct drm_crtc *crtc);
> > > > > > > > >  };
> > > > > > > > >
> > > > > > > > >  /**
> > > > > > > > > --
> > > > > > > > > 2.30.2
> > > > > > > >
> > > > > > > > --
> > > > > > > > Daniel Vetter
> > > > > > > > Software Engineer, Intel Corporation
> > > > > > > > http://blog.ffwll.ch
> > > > > >
> > > > > > --
> > > > > > Daniel Vetter
> > > > > > Software Engineer, Intel Corporation
> > > > > > http://blog.ffwll.ch
> > > >
> > > > --
> > > > Daniel Vetter
> > > > Software Engineer, Intel Corporation
> > > > http://blog.ffwll.ch
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch