From: Rob Clark
Date: Tue, 21 Sep 2021 20:32:41 -0700
Subject: Re: [PATCH v3 4/9] drm/scheduler: Add fence deadline support
To: Andrey Grodzovsky
Cc: dri-devel, "moderated list:DMA BUFFER SHARING FRAMEWORK",
 Daniel Vetter, Christian König, Michel Dänzer, Pekka Paalanen,
 Rob Clark, David Airlie, Sumit Semwal, Tian Tao, Steven Price,
 Melissa Wen, Luben Tuikov, Boris Brezillon, Jack Zhang,
 open list, "open list:DMA BUFFER SHARING FRAMEWORK"
References: <20210903184806.1680887-1-robdclark@gmail.com>
 <20210903184806.1680887-5-robdclark@gmail.com>
 <101628ea-23c9-4bc0-5abc-a5b71b0fccc1@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Sep 21, 2021 at 7:18 PM Andrey Grodzovsky wrote:
>
>
> On 2021-09-21 4:47 p.m., Rob Clark wrote:
> > On Tue, Sep 21, 2021 at 1:09 PM Andrey Grodzovsky wrote:
> >> On 2021-09-03 2:47 p.m., Rob Clark wrote:
> >>
> >>> From: Rob Clark
> >>>
> >>> As the finished fence is the one that is exposed to userspace, and
> >>> therefore the one that other operations, like atomic update, would
> >>> block on, we need to propagate the deadline from the finished
> >>> fence to the actual hw fence.
> >>>
> >>> v2: Split into drm_sched_fence_set_parent() (ckoenig)
> >>>
> >>> Signed-off-by: Rob Clark
> >>> ---
> >>>  drivers/gpu/drm/scheduler/sched_fence.c | 34 +++++++++++++++++++++++++
> >>>  drivers/gpu/drm/scheduler/sched_main.c  |  2 +-
> >>>  include/drm/gpu_scheduler.h             |  8 ++++++
> >>>  3 files changed, 43 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
> >>> index bcea035cf4c6..4fc41a71d1c7 100644
> >>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
> >>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
> >>> @@ -128,6 +128,30 @@ static void drm_sched_fence_release_finished(struct dma_fence *f)
> >>>          dma_fence_put(&fence->scheduled);
> >>>  }
> >>>
> >>> +static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
> >>> +                                                  ktime_t deadline)
> >>> +{
> >>> +        struct drm_sched_fence *fence = to_drm_sched_fence(f);
> >>> +        unsigned long flags;
> >>> +
> >>> +        spin_lock_irqsave(&fence->lock, flags);
> >>> +
> >>> +        /* If we already have an earlier deadline, keep it: */
> >>> +        if (test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
> >>> +            ktime_before(fence->deadline, deadline)) {
> >>> +                spin_unlock_irqrestore(&fence->lock, flags);
> >>> +                return;
> >>> +        }
> >>> +
> >>> +        fence->deadline = deadline;
> >>> +        set_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
> >>> +
> >>> +        spin_unlock_irqrestore(&fence->lock, flags);
> >>> +
> >>> +        if (fence->parent)
> >>> +                dma_fence_set_deadline(fence->parent, deadline);
> >>> +}
> >>> +
> >>>  static const struct dma_fence_ops drm_sched_fence_ops_scheduled = {
> >>>          .get_driver_name = drm_sched_fence_get_driver_name,
> >>>          .get_timeline_name = drm_sched_fence_get_timeline_name,
> >>> @@ -138,6 +162,7 @@ static const struct dma_fence_ops drm_sched_fence_ops_finished = {
> >>>          .get_driver_name = drm_sched_fence_get_driver_name,
> >>>          .get_timeline_name = drm_sched_fence_get_timeline_name,
> >>>          .release = drm_sched_fence_release_finished,
> >>> +        .set_deadline = drm_sched_fence_set_deadline_finished,
> >>>  };
> >>>
> >>>  struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
> >>> @@ -152,6 +177,15 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
> >>>  }
> >>>  EXPORT_SYMBOL(to_drm_sched_fence);
> >>>
> >>> +void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
> >>> +                                struct dma_fence *fence)
> >>> +{
> >>> +        s_fence->parent = dma_fence_get(fence);
> >>> +        if (test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT,
> >>> +                     &s_fence->finished.flags))
> >>> +                dma_fence_set_deadline(fence, s_fence->deadline);
> >>
> >> I believe above you should pass s_fence->finished to
> >> dma_fence_set_deadline instead of fence, which is the HW fence itself.
> >
> > Hmm, unless this has changed recently with some patches I don't have,
> > s_fence->parent is the one signalled by hw, so it is the one we want
> > to set the deadline on
> >
> > BR,
> > -R
>
> No it didn't change. But then when exactly will
> drm_sched_fence_set_deadline_finished execute such that
> fence->parent != NULL? In other words, I am not clear how propagation
> happens otherwise - if dma_fence_set_deadline is called with the HW
> fence, then the assumption here is that the driver-provided
> dma_fence_ops.set_deadline callback executes, but I was under the
> impression that drm_sched_fence_set_deadline_finished is the one that
> propagates the deadline to the HW fence's callback, and for it to
> execute, dma_fence_set_deadline needs to be called with
> s_fence->finished.

Assuming I didn't screw up the drm/msm conversion to the scheduler,
&s_fence->finished is the one that will be returned to userspace, and
later passed back to the kernel for atomic commit (or to the
compositor). So it is the one that fence->set_deadline() will be
called on. But s_fence->parent is the actual hw fence that needs to
know about the deadline. Depending on whether the job has already been
written into the hw ringbuffer, there are two cases:

1) not scheduled yet: s_fence will store the deadline and propagate
   it later once s_fence->parent is known

2) already scheduled: s_fence->finished.set_deadline will propagate
   it directly to the real fence
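To make the flow concrete, the call chain looks roughly like this (an
untested sketch: it elides the locking and the keep-the-earlier-deadline
check, and assumes the dma_fence_set_deadline() helper added earlier in
this series simply dispatches to ops->set_deadline):

    /* the caller (eg. the atomic commit path) only ever sees the
     * userspace-visible finished fence:
     */
    dma_fence_set_deadline(&s_fence->finished, deadline)
      /* dispatches via finished.ops->set_deadline to: */
      -> drm_sched_fence_set_deadline_finished(&s_fence->finished, deadline)
           /* case 1: s_fence->parent is still NULL, so the deadline is
            * stashed in s_fence->deadline (and
            * DMA_FENCE_FLAG_HAS_DEADLINE_BIT is set), to be replayed
            * from drm_sched_fence_set_parent() once the hw fence exists
            *
            * case 2: s_fence->parent is already known, so it is
            * forwarded immediately:
            */
           -> dma_fence_set_deadline(s_fence->parent, deadline)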
BR,
-R

> Andrey
>
>
> >
> >> Andrey
> >>
> >>
> >>> +}
> >>> +
> >>>  struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
> >>>                                                void *owner)
> >>>  {
> >>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> >>> index 595e47ff7d06..27bf0ac0625f 100644
> >>> --- a/drivers/gpu/drm/scheduler/sched_main.c
> >>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> >>> @@ -978,7 +978,7 @@ static int drm_sched_main(void *param)
> >>>                  drm_sched_fence_scheduled(s_fence);
> >>>
> >>>                  if (!IS_ERR_OR_NULL(fence)) {
> >>> -                        s_fence->parent = dma_fence_get(fence);
> >>> +                        drm_sched_fence_set_parent(s_fence, fence);
> >>>                          r = dma_fence_add_callback(fence, &sched_job->cb,
> >>>                                                     drm_sched_job_done_cb);
> >>>                          if (r == -ENOENT)
> >>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> >>> index 7f77a455722c..158ddd662469 100644
> >>> --- a/include/drm/gpu_scheduler.h
> >>> +++ b/include/drm/gpu_scheduler.h
> >>> @@ -238,6 +238,12 @@ struct drm_sched_fence {
> >>>           */
> >>>          struct dma_fence finished;
> >>>
> >>> +        /**
> >>> +         * @deadline: deadline set on &drm_sched_fence.finished which
> >>> +         * potentially needs to be propagated to &drm_sched_fence.parent
> >>> +         */
> >>> +        ktime_t deadline;
> >>> +
> >>>          /**
> >>>           * @parent: the fence returned by &drm_sched_backend_ops.run_job
> >>>           * when scheduling the job on hardware. We signal the
> >>> @@ -505,6 +511,8 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
> >>>                                     enum drm_sched_priority priority);
> >>>  bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
> >>>
> >>> +void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
> >>> +                                struct dma_fence *fence);
> >>>  struct drm_sched_fence *drm_sched_fence_alloc(
> >>>                  struct drm_sched_entity *s_entity, void *owner);
> >>>  void drm_sched_fence_init(struct drm_sched_fence *fence,