From: Eric Anholt <eric@anholt.net>
To: dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, Lucas Stach, amd-gfx@lists.freedesktop.org, Eric Anholt <eric@anholt.net>
Subject: [PATCH 1/3 v2] drm/v3d: Take a lock across GPU scheduler job creation and queuing.
Date: Wed, 6 Jun 2018 10:48:51 -0700
Message-Id: <20180606174851.12433-1-eric@anholt.net>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <1528274797.26063.6.camel@pengutronix.de>
References: <1528274797.26063.6.camel@pengutronix.de>

Between creation and queueing of a job, you need to prevent any other
job from being created and queued.  Otherwise the scheduler's fences
may be signaled out of seqno order.

v2: move mutex unlock to the error label.

Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes: 57692c94dcbe ("drm/v3d: Introduce a new DRM driver for Broadcom V3D V3.x+")
---
 drivers/gpu/drm/v3d/v3d_drv.h | 5 +++++
 drivers/gpu/drm/v3d/v3d_gem.c | 4 ++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
index a043ac3aae98..26005abd9c5d 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.h
+++ b/drivers/gpu/drm/v3d/v3d_drv.h
@@ -85,6 +85,11 @@ struct v3d_dev {
 	 */
 	struct mutex reset_lock;
 
+	/* Lock taken when creating and pushing the GPU scheduler
+	 * jobs, to keep the sched-fence seqnos in order.
+	 */
+	struct mutex sched_lock;
+
 	struct {
 		u32 num_allocated;
 		u32 pages_allocated;
diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
index b513f9189caf..269fe16379c0 100644
--- a/drivers/gpu/drm/v3d/v3d_gem.c
+++ b/drivers/gpu/drm/v3d/v3d_gem.c
@@ -550,6 +550,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
 	if (ret)
 		goto fail;
 
+	mutex_lock(&v3d->sched_lock);
 	if (exec->bin.start != exec->bin.end) {
 		ret = drm_sched_job_init(&exec->bin.base,
 					 &v3d->queue[V3D_BIN].sched,
@@ -576,6 +577,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
 	kref_get(&exec->refcount); /* put by scheduler job completion */
 	drm_sched_entity_push_job(&exec->render.base,
 				  &v3d_priv->sched_entity[V3D_RENDER]);
+	mutex_unlock(&v3d->sched_lock);
 
 	v3d_attach_object_fences(exec);
 
@@ -594,6 +596,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
 	return 0;
 
 fail_unreserve:
+	mutex_unlock(&v3d->sched_lock);
 	v3d_unlock_bo_reservations(dev, exec, &acquire_ctx);
 fail:
 	v3d_exec_put(exec);
@@ -615,6 +618,7 @@ v3d_gem_init(struct drm_device *dev)
 	spin_lock_init(&v3d->job_lock);
 	mutex_init(&v3d->bo_lock);
 	mutex_init(&v3d->reset_lock);
+	mutex_init(&v3d->sched_lock);
 
 	/* Note: We don't allocate address 0.  Various bits of HW
 	 * treat 0 as special, such as the occlusion query counters
-- 
2.17.0
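
For readers following the thread, the critical section the new lock guards
looks roughly like this when the hunks above are condensed into one place.
This is a sketch, not the literal driver code: only the render-queue path is
shown, the v3d->queue[V3D_RENDER].sched and trailing owner arguments to
drm_sched_job_init() are assumed by analogy with the V3D_BIN hunk (the hunk
cuts off before them), and the surrounding BO reservation code is elided.

	mutex_lock(&v3d->sched_lock);

	/* Creating the scheduler job assigns its sched-fence seqno. */
	ret = drm_sched_job_init(&exec->render.base,
				 &v3d->queue[V3D_RENDER].sched,
				 &v3d_priv->sched_entity[V3D_RENDER],
				 v3d_priv);
	if (ret)
		goto fail_unreserve;	/* label now drops the lock (the v2 change) */

	kref_get(&exec->refcount); /* put by scheduler job completion */

	/* Push the job before any other submit can init a newer one, so
	 * jobs are queued in the same order their seqnos were assigned.
	 */
	drm_sched_entity_push_job(&exec->render.base,
				  &v3d_priv->sched_entity[V3D_RENDER]);

	mutex_unlock(&v3d->sched_lock);
	/* success path continues: attach fences, unreserve BOs, return 0 */

fail_unreserve:
	mutex_unlock(&v3d->sched_lock);
	v3d_unlock_bo_reservations(dev, exec, &acquire_ctx);

Without the lock, a second submit could run drm_sched_job_init() after this
one but reach drm_sched_entity_push_job() first, so the later seqno would be
queued, and could signal, ahead of the earlier one.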