From: Steven Price
To: Daniel Vetter, David Airlie, Rob Herring, Tomeu Vizoso
Cc: Alyssa Rosenzweig, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Steven Price, Boris Brezillon
Subject: [PATCH] drm/panfrost: Don't corrupt the queue mutex on open/close
Date: Thu, 29 Oct 2020 17:00:47 +0000
Message-Id: <20201029170047.30564-1-steven.price@arm.com>

The mutex within the panfrost_queue_state should have the lifetime of the queue; however, it was erroneously initialised/destroyed in panfrost_job_{open,close}, which is called every time a client opens/closes the drm node.
Move the initialisation/destruction to panfrost_job_{init,fini} where it belongs.

Fixes: 1a11a88cfd9a ("drm/panfrost: Fix job timeout handling")
Signed-off-by: Steven Price
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/panfrost/panfrost_job.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index cfb431624eea..145ad37eda6a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -595,6 +595,8 @@ int panfrost_job_init(struct panfrost_device *pfdev)
 	}
 
 	for (j = 0; j < NUM_JOB_SLOTS; j++) {
+		mutex_init(&js->queue[j].lock);
+
 		js->queue[j].fence_context = dma_fence_context_alloc(1);
 
 		ret = drm_sched_init(&js->queue[j].sched,
@@ -625,8 +627,10 @@ void panfrost_job_fini(struct panfrost_device *pfdev)
 
 	job_write(pfdev, JOB_INT_MASK, 0);
 
-	for (j = 0; j < NUM_JOB_SLOTS; j++)
+	for (j = 0; j < NUM_JOB_SLOTS; j++) {
 		drm_sched_fini(&js->queue[j].sched);
+		mutex_destroy(&js->queue[j].lock);
+	}
 
 }
 
@@ -638,7 +642,6 @@ int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
 	int ret, i;
 
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
-		mutex_init(&js->queue[i].lock);
 		sched = &js->queue[i].sched;
 		ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i],
 					    DRM_SCHED_PRIORITY_NORMAL, &sched,
@@ -657,7 +660,6 @@ void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
 
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
 		drm_sched_entity_destroy(&panfrost_priv->sched_entity[i]);
-		mutex_destroy(&js->queue[i].lock);
 		/* Ensure any timeouts relating to this client have completed */
 		flush_delayed_work(&js->queue[i].sched.work_tdr);
 	}
-- 
2.20.1
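For readers less familiar with the lifetime rule this patch enforces, the bug can be illustrated outside the kernel. The sketch below is a hypothetical userspace miniature using pthread mutexes; `queue_state`, `queue_init`/`queue_fini` and `client_submit` are illustrative names, not the driver's real API. The point is the same as in the patch: re-initialising a lock that another client may currently hold is undefined behaviour, so the lock must be set up once with the queue (the analogue of panfrost_job_init/fini), never per open/close.

```c
#include <pthread.h>

/*
 * Hypothetical miniature of the panfrost pattern: one queue shared by
 * all clients, with a lock whose lifetime matches the queue's.
 */
struct queue_state {
	pthread_mutex_t lock;	/* must live exactly as long as the queue */
	int jobs;
};

/* Queue lifetime: the lock is created once, with the queue
 * (the analogue of panfrost_job_init()). */
void queue_init(struct queue_state *q)
{
	pthread_mutex_init(&q->lock, NULL);
	q->jobs = 0;
}

/* ...and destroyed once, with the queue (panfrost_job_fini()). */
void queue_fini(struct queue_state *q)
{
	pthread_mutex_destroy(&q->lock);
}

/*
 * Per-client paths must only *use* the lock. If open/close also ran
 * pthread_mutex_init()/destroy() here, a second client opening the
 * node would re-initialise a mutex the first client might be holding,
 * which is undefined behaviour -- the corruption the patch title
 * refers to.
 */
void client_submit(struct queue_state *q)
{
	pthread_mutex_lock(&q->lock);
	q->jobs++;
	pthread_mutex_unlock(&q->lock);
}
```

With the init/destroy calls moved as in the patch, any number of clients can open and close the node; they all serialise on the same still-valid lock.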