Message-ID: <1528274797.26063.6.camel@pengutronix.de>
Subject: Re: [PATCH 1/3] drm/v3d: Take a lock across GPU scheduler job creation and queuing.
From: Lucas Stach <l.stach@pengutronix.de>
To: Eric Anholt <eric@anholt.net>, dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, amd-gfx@lists.freedesktop.org
Date: Wed, 06 Jun 2018 10:46:37 +0200
In-Reply-To: <20180605190302.18279-1-eric@anholt.net>
References: <20180605190302.18279-1-eric@anholt.net>

On Tuesday, 05.06.2018 at 12:03 -0700, Eric Anholt wrote:
> Between creation and queueing of a job, you need to prevent any other
> job from being created and queued.  Otherwise the scheduler's fences
> may be signaled out of seqno order.
>
> Signed-off-by: Eric Anholt <eric@anholt.net>
> Fixes: 57692c94dcbe ("drm/v3d: Introduce a new DRM driver for Broadcom V3D V3.x+")
> ---
>
> ccing amd-gfx due to interaction of this series with the scheduler.
>
>  drivers/gpu/drm/v3d/v3d_drv.h |  5 +++++
>  drivers/gpu/drm/v3d/v3d_gem.c | 11 +++++++++--
>  2 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
> index a043ac3aae98..26005abd9c5d 100644
> --- a/drivers/gpu/drm/v3d/v3d_drv.h
> +++ b/drivers/gpu/drm/v3d/v3d_drv.h
> @@ -85,6 +85,11 @@ struct v3d_dev {
>  	 */
>  	struct mutex reset_lock;
>  
> +	/* Lock taken when creating and pushing the GPU scheduler
> +	 * jobs, to keep the sched-fence seqnos in order.
> +	 */
> +	struct mutex sched_lock;
> +
>  	struct {
>  		u32 num_allocated;
>  		u32 pages_allocated;
> diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
> index b513f9189caf..9ea83bdb9a30 100644
> --- a/drivers/gpu/drm/v3d/v3d_gem.c
> +++ b/drivers/gpu/drm/v3d/v3d_gem.c
> @@ -550,13 +550,16 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
>  	if (ret)
>  		goto fail;
>  
> +	mutex_lock(&v3d->sched_lock);
>  	if (exec->bin.start != exec->bin.end) {
>  		ret = drm_sched_job_init(&exec->bin.base,
>  					 &v3d->queue[V3D_BIN].sched,
>  					 &v3d_priv->sched_entity[V3D_BIN],
>  					 v3d_priv);
> -		if (ret)
> +		if (ret) {
> +			mutex_unlock(&v3d->sched_lock);
>  			goto fail_unreserve;

I don't see any path where you would go to fail_unreserve with the
mutex not yet locked, so you could just fold the mutex_unlock into
this error path for a bit less code duplication.

Otherwise this looks fine.
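Roughly like this (untested sketch, just to illustrate the idea; it
assumes fail_unreserve really is only reachable once sched_lock has
been taken):

	mutex_lock(&v3d->sched_lock);
	if (exec->bin.start != exec->bin.end) {
		ret = drm_sched_job_init(&exec->bin.base,
					 &v3d->queue[V3D_BIN].sched,
					 &v3d_priv->sched_entity[V3D_BIN],
					 v3d_priv);
		if (ret)
			goto fail_unreserve;	/* label drops the lock */
		...
	}

	...

	drm_sched_entity_push_job(&exec->render.base,
				  &v3d_priv->sched_entity[V3D_RENDER]);
	mutex_unlock(&v3d->sched_lock);
	...
	return 0;

fail_unreserve:
	mutex_unlock(&v3d->sched_lock);
	...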
Regards,
Lucas

> +		}
>  
>  		exec->bin_done_fence =
>  			dma_fence_get(&exec->bin.base.s_fence->finished);
> @@ -570,12 +573,15 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
>  					 &v3d->queue[V3D_RENDER].sched,
>  					 &v3d_priv->sched_entity[V3D_RENDER],
>  					 v3d_priv);
> -	if (ret)
> +	if (ret) {
> +		mutex_unlock(&v3d->sched_lock);
>  		goto fail_unreserve;
> +	}
>  
>  	kref_get(&exec->refcount); /* put by scheduler job completion */
>  	drm_sched_entity_push_job(&exec->render.base,
>  				  &v3d_priv->sched_entity[V3D_RENDER]);
> +	mutex_unlock(&v3d->sched_lock);
>  
>  	v3d_attach_object_fences(exec);
>  
> @@ -615,6 +621,7 @@ v3d_gem_init(struct drm_device *dev)
>  	spin_lock_init(&v3d->job_lock);
>  	mutex_init(&v3d->bo_lock);
>  	mutex_init(&v3d->reset_lock);
> +	mutex_init(&v3d->sched_lock);
>  
>  	/* Note: We don't allocate address 0.  Various bits of HW
>  	 * treat 0 as special, such as the occlusion query counters
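As an aside, the out-of-order scenario the commit message guards
against looks roughly like this (hypothetical interleaving, seqno
values made up):

/*
 * Two threads submitting on the same scheduler entity, without
 * sched_lock:
 *
 *   thread A: drm_sched_job_init(jobA)        -> fence seqno 41
 *   thread B: drm_sched_job_init(jobB)        -> fence seqno 42
 *   thread B: drm_sched_entity_push_job(jobB)
 *   thread A: drm_sched_entity_push_job(jobA)
 *
 * drm_sched_job_init() allocates the sched-fence seqno, so jobB's
 * fence (seqno 42) can signal before jobA's (seqno 41) on the same
 * fence context, violating the in-order-signaling assumption that
 * dma_fence users rely on.
 */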