From: Steven Price <steven.price@arm.com>
To: Daniel Vetter, David Airlie, Christian König
Cc: Steven Price, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, Alex Deucher, Andrey Grodzovsky,
	Nayan Deshmukh, Sharat Masetty
Subject: [PATCH v4] drm: Don't free jobs in wait_event_interruptible()
Date: Thu, 26 Sep 2019 15:16:30 +0100
Message-Id: <20190926141630.14258-1-steven.price@arm.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: linux-kernel@vger.kernel.org

drm_sched_cleanup_jobs() attempts to free finished jobs, however
because it is called as the condition of wait_event_interruptible() it
must not sleep. Unfortunately some free callbacks (notably for
Panfrost) do sleep.

Instead let's rename drm_sched_cleanup_jobs() to
drm_sched_get_cleanup_job() and simply return a job for processing if
there is one. The caller can then call the free_job() callback outside
the wait_event_interruptible(), where sleeping is possible, before
re-checking and returning to sleep if necessary.

Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes from v3:
 * drm_sched_main() re-arms the timeout for the next job after calling
   free_job()

 drivers/gpu/drm/scheduler/sched_main.c | 45 +++++++++++++++-----------
 1 file changed, 26 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 9a0ee74d82dc..148468447ba9 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -622,43 +622,41 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 }
 
 /**
- * drm_sched_cleanup_jobs - destroy finished jobs
+ * drm_sched_get_cleanup_job - fetch the next finished job to be destroyed
  *
  * @sched: scheduler instance
  *
- * Remove all finished jobs from the mirror list and destroy them.
+ * Returns the next finished job from the mirror list (if there is one)
+ * ready for it to be destroyed.
  */
-static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched)
+static struct drm_sched_job *
+drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
 {
+	struct drm_sched_job *job = NULL;
 	unsigned long flags;
 
 	/* Don't destroy jobs while the timeout worker is running */
 	if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
 	    !cancel_delayed_work(&sched->work_tdr))
-		return;
-
+		return NULL;
 
-	while (!list_empty(&sched->ring_mirror_list)) {
-		struct drm_sched_job *job;
+	spin_lock_irqsave(&sched->job_list_lock, flags);
 
-		job = list_first_entry(&sched->ring_mirror_list,
+	job = list_first_entry_or_null(&sched->ring_mirror_list,
 				       struct drm_sched_job, node);
-		if (!dma_fence_is_signaled(&job->s_fence->finished))
-			break;
 
-		spin_lock_irqsave(&sched->job_list_lock, flags);
+	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
 		/* remove job from ring_mirror_list */
 		list_del_init(&job->node);
-		spin_unlock_irqrestore(&sched->job_list_lock, flags);
-
-		sched->ops->free_job(job);
+	} else {
+		job = NULL;
+		/* queue timeout for next job */
+		drm_sched_start_timeout(sched);
 	}
 
-	/* queue timeout for next job */
-	spin_lock_irqsave(&sched->job_list_lock, flags);
-	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
+	return job;
 }
 
 /**
@@ -698,12 +696,21 @@ static int drm_sched_main(void *param)
 		struct drm_sched_fence *s_fence;
 		struct drm_sched_job *sched_job;
 		struct dma_fence *fence;
+		struct drm_sched_job *cleanup_job = NULL;
 
 		wait_event_interruptible(sched->wake_up_worker,
-					 (drm_sched_cleanup_jobs(sched),
+					 (cleanup_job = drm_sched_get_cleanup_job(sched)) ||
 					 (!drm_sched_blocked(sched) &&
 					  (entity = drm_sched_select_entity(sched))) ||
-					 kthread_should_stop()));
+					 kthread_should_stop());
+
+		while (cleanup_job) {
+			sched->ops->free_job(cleanup_job);
+			/* queue timeout for next job */
+			drm_sched_start_timeout(sched);
+
+			cleanup_job = drm_sched_get_cleanup_job(sched);
+		}
 
 		if (!entity)
 			continue;
-- 
2.20.1
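
P.S. For anyone less familiar with the constraint being fixed: the
expression used as the condition of wait_event_interruptible() is
evaluated while the thread is preparing to sleep, so it must never
block itself; anything that may sleep has to happen after the wait
returns. The sketch below is a small, hypothetical userspace analogue
of that pattern using pthreads. It is illustrative only and not
drm/scheduler code: pthread_cond_wait() stands in for
wait_event_interruptible(), and every name in it (struct job,
get_finished_job(), free_job(), worker()) is invented for the example.
The predicate only detaches a finished job under the lock, and the
potentially sleeping free callback runs outside the wait, mirroring
what this patch does with drm_sched_get_cleanup_job() and free_job().

/*
 * Hypothetical userspace sketch, NOT drm code.
 * Build with: cc -pthread example.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct job {
	int id;
	bool finished;
	struct job *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wake = PTHREAD_COND_INITIALIZER;
static struct job *pending;
static bool stop;

/* Analogue of the wait condition: must not sleep, only detaches a job. */
static struct job *get_finished_job(void)
{
	struct job *job = pending;

	if (job && job->finished) {
		pending = job->next;
		return job;
	}
	return NULL;
}

/* Analogue of a driver's free_job() callback: allowed to sleep. */
static void free_job(struct job *job)
{
	usleep(1000);			/* stand-in for sleeping cleanup work */
	printf("freed job %d\n", job->id);
	free(job);
}

static void *worker(void *unused)
{
	(void)unused;

	for (;;) {
		struct job *job;

		pthread_mutex_lock(&lock);
		/* Predicate checked under the lock never blocks. */
		while (!(job = get_finished_job()) && !stop)
			pthread_cond_wait(&wake, &lock);
		pthread_mutex_unlock(&lock);

		if (!job)
			return NULL;	/* stopping and nothing left to clean up */

		free_job(job);		/* sleeping work happens outside the wait */
	}
}

int main(void)
{
	pthread_t thread;

	pthread_create(&thread, NULL, worker, NULL);

	/* Queue a few already-finished jobs and wake the worker. */
	for (int i = 0; i < 3; i++) {
		struct job *job = calloc(1, sizeof(*job));

		job->id = i;
		job->finished = true;

		pthread_mutex_lock(&lock);
		job->next = pending;
		pending = job;
		pthread_cond_signal(&wake);
		pthread_mutex_unlock(&lock);
	}

	pthread_mutex_lock(&lock);
	stop = true;
	pthread_cond_broadcast(&wake);
	pthread_mutex_unlock(&lock);

	pthread_join(thread, NULL);
	return 0;
}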