From: Eric Anholt
To: dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, Lucas Stach, amd-gfx@lists.freedesktop.org, Eric Anholt
Subject: [PATCH 1/3] drm/v3d: Take a lock across GPU scheduler job creation and queuing.
Date: Tue, 5 Jun 2018 12:03:00 -0700
Message-Id: <20180605190302.18279-1-eric@anholt.net>

Between creation and queuing of a job, you need to prevent any other
job from being created and queued.  Otherwise the scheduler's fences
may be signaled out of seqno order.

Signed-off-by: Eric Anholt
Fixes: 57692c94dcbe ("drm/v3d: Introduce a new DRM driver for Broadcom V3D V3.x+")
---
ccing amd-gfx due to interaction of this series with the scheduler.

 drivers/gpu/drm/v3d/v3d_drv.h |  5 +++++
 drivers/gpu/drm/v3d/v3d_gem.c | 11 +++++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
index a043ac3aae98..26005abd9c5d 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.h
+++ b/drivers/gpu/drm/v3d/v3d_drv.h
@@ -85,6 +85,11 @@ struct v3d_dev {
 	 */
 	struct mutex reset_lock;
 
+	/* Lock taken when creating and pushing the GPU scheduler
+	 * jobs, to keep the sched-fence seqnos in order.
+	 */
+	struct mutex sched_lock;
+
 	struct {
 		u32 num_allocated;
 		u32 pages_allocated;
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index b513f9189caf..9ea83bdb9a30 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -550,13 +550,16 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto fail;
 
+	mutex_lock(&v3d->sched_lock);
 	if (exec->bin.start != exec->bin.end) {
 		ret = drm_sched_job_init(&exec->bin.base,
 					 &v3d->queue[V3D_BIN].sched,
 					 &v3d_priv->sched_entity[V3D_BIN],
 					 v3d_priv);
-		if (ret)
+		if (ret) {
+			mutex_unlock(&v3d->sched_lock);
 			goto fail_unreserve;
+		}
 
 		exec->bin_done_fence =
 			dma_fence_get(&exec->bin.base.s_fence->finished);
@@ -570,12 +573,15 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
 				 &v3d->queue[V3D_RENDER].sched,
 				 &v3d_priv->sched_entity[V3D_RENDER],
 				 v3d_priv);
-	if (ret)
+	if (ret) {
+		mutex_unlock(&v3d->sched_lock);
 		goto fail_unreserve;
+	}
 
 	kref_get(&exec->refcount); /* put by scheduler job completion */
 	drm_sched_entity_push_job(&exec->render.base,
 				  &v3d_priv->sched_entity[V3D_RENDER]);
+	mutex_unlock(&v3d->sched_lock);
 
 	v3d_attach_object_fences(exec);
 
@@ -615,6 +621,7 @@ v3d_gem_init(struct drm_device *dev)
 	spin_lock_init(&v3d->job_lock);
 	mutex_init(&v3d->bo_lock);
 	mutex_init(&v3d->reset_lock);
+	mutex_init(&v3d->sched_lock);
 
 	/* Note: We don't allocate address 0. Various bits of HW
 	 * treat 0 as special, such as the occlusion query counters
-- 
2.17.0