Reply-To: christian.koenig@amd.com
Subject: Re: [PATCH 1/3] drm/v3d: Take a lock across GPU scheduler job creation and queuing.
To: Lucas Stach, Eric Anholt, dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, amd-gfx@lists.freedesktop.org
References: <20180605190302.18279-1-eric@anholt.net> <1528274797.26063.6.camel@pengutronix.de>
From: Christian König
Message-ID: <6462aaec-a5f3-3a78-2eb1-fb24faa68e42@gmail.com>
Date: Wed, 6 Jun 2018 10:52:07 +0200
In-Reply-To: <1528274797.26063.6.camel@pengutronix.de>

On 06.06.2018 at 10:46, Lucas Stach wrote:
> On Tuesday, 05.06.2018, 12:03 -0700, Eric Anholt wrote:
>> Between creation and queueing of a job, you need to prevent any other
>> job from being created and queued.  Otherwise the scheduler's fences
>> may be signaled out of seqno order.
>>
>> Signed-off-by: Eric Anholt
>> Fixes: 57692c94dcbe ("drm/v3d: Introduce a new DRM driver for Broadcom V3D V3.x+")
>> ---
>>
>> ccing amd-gfx due to interaction of this series with the scheduler.
>>
>>  drivers/gpu/drm/v3d/v3d_drv.h |  5 +++++
>>  drivers/gpu/drm/v3d/v3d_gem.c | 11 +++++++++--
>>  2 files changed, 14 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
>> index a043ac3aae98..26005abd9c5d 100644
>> --- a/drivers/gpu/drm/v3d/v3d_drv.h
>> +++ b/drivers/gpu/drm/v3d/v3d_drv.h
>> @@ -85,6 +85,11 @@ struct v3d_dev {
>>  	 */
>>  	struct mutex reset_lock;
>>
>> +	/* Lock taken when creating and pushing the GPU scheduler
>> +	 * jobs, to keep the sched-fence seqnos in order.
>> +	 */
>> +	struct mutex sched_lock;
>> +
>>  	struct {
>>  		u32 num_allocated;
>>  		u32 pages_allocated;
>> diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
>> index b513f9189caf..9ea83bdb9a30 100644
>> --- a/drivers/gpu/drm/v3d/v3d_gem.c
>> +++ b/drivers/gpu/drm/v3d/v3d_gem.c
>> @@ -550,13 +550,16 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
>>  	if (ret)
>>  		goto fail;
>>
>> +	mutex_lock(&v3d->sched_lock);
>>  	if (exec->bin.start != exec->bin.end) {
>>  		ret = drm_sched_job_init(&exec->bin.base,
>>  					 &v3d->queue[V3D_BIN].sched,
>>  					 &v3d_priv->sched_entity[V3D_BIN],
>>  					 v3d_priv);
>> -		if (ret)
>> +		if (ret) {
>> +			mutex_unlock(&v3d->sched_lock);
>>  			goto fail_unreserve;
> I don't see any path where you would go to fail_unreserve with the
> mutex not yet locked, so you could just fold the mutex_unlock into this
> error path for a bit less code duplication.
>
> Otherwise this looks fine.

Yeah, agree that could be cleaned up.

I can't judge the correctness of the driver, but at least the scheduler
handling looks good to me.

Regards,
Christian.

>
> Regards,
> Lucas
>
>> +		}
>>
>>  		exec->bin_done_fence =
>>  			dma_fence_get(&exec->bin.base.s_fence->finished);
>> @@ -570,12 +573,15 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
>>  				 &v3d->queue[V3D_RENDER].sched,
>>  				 &v3d_priv->sched_entity[V3D_RENDER],
>>  				 v3d_priv);
>> -	if (ret)
>> +	if (ret) {
>> +		mutex_unlock(&v3d->sched_lock);
>>  		goto fail_unreserve;
>> +	}
>>
>>  	kref_get(&exec->refcount); /* put by scheduler job completion */
>>  	drm_sched_entity_push_job(&exec->render.base,
>>  				  &v3d_priv->sched_entity[V3D_RENDER]);
>> +	mutex_unlock(&v3d->sched_lock);
>>
>>  	v3d_attach_object_fences(exec);
>>
>> @@ -615,6 +621,7 @@ v3d_gem_init(struct drm_device *dev)
>>  	spin_lock_init(&v3d->job_lock);
>>  	mutex_init(&v3d->bo_lock);
>>  	mutex_init(&v3d->reset_lock);
>> +	mutex_init(&v3d->sched_lock);
>>
>>  	/* Note: We don't allocate address 0.  Various bits of HW
>>  	 * treat 0 as special, such as the occlusion query counters
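
For illustration, the cleanup Lucas suggests would leave the locking in
v3d_submit_cl_ioctl() looking roughly like this. This is an untested sketch
distilled from the patch above, not code from the thread: the /* ... */
comments stand in for surrounding code that the quoted hunks elide, and it
assumes, as Lucas points out, that every goto fail_unreserve is reached with
sched_lock already held:

	mutex_lock(&v3d->sched_lock);

	if (exec->bin.start != exec->bin.end) {
		ret = drm_sched_job_init(&exec->bin.base,
					 &v3d->queue[V3D_BIN].sched,
					 &v3d_priv->sched_entity[V3D_BIN],
					 v3d_priv);
		if (ret)
			goto fail_unreserve;	/* unlock folded into the label */

		exec->bin_done_fence =
			dma_fence_get(&exec->bin.base.s_fence->finished);
		/* ... push the bin job ... */
	}

	ret = drm_sched_job_init(&exec->render.base,
				 &v3d->queue[V3D_RENDER].sched,
				 &v3d_priv->sched_entity[V3D_RENDER],
				 v3d_priv);
	if (ret)
		goto fail_unreserve;

	kref_get(&exec->refcount); /* put by scheduler job completion */
	drm_sched_entity_push_job(&exec->render.base,
				  &v3d_priv->sched_entity[V3D_RENDER]);
	mutex_unlock(&v3d->sched_lock);	/* success path drops the lock here */

	v3d_attach_object_fences(exec);
	/* ... */
	return 0;

fail_unreserve:
	mutex_unlock(&v3d->sched_lock);	/* single unlock for all error gotos */
	/* ... unreserve BOs, drop the exec reference, return ret ... */

The point of the lock itself is unchanged by the cleanup: drm_sched_job_init()
allocates the job's scheduler fence (and with it the next seqno), and
drm_sched_entity_push_job() queues the job. Holding sched_lock across both
calls keeps a concurrent submitter from grabbing a later seqno and pushing its
job first, which is exactly the out-of-order fence signaling the commit
message warns about.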