From: Eric Anholt <eric@anholt.net>
To: dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, Lucas Stach, Alex Deucher,
	christian.koenig@amd.com, Eric Anholt
Subject: [PATCH] drm/sched: Extend the documentation.
Date: Wed, 4 Apr 2018 15:32:51 -0700
Message-Id: <20180404223251.28449-1-eric@anholt.net>
X-Mailer: git-send-email 2.17.0

These comments answer all the questions I had for myself when
implementing a driver using the GPU scheduler.

Signed-off-by: Eric Anholt <eric@anholt.net>
---
 include/drm/gpu_scheduler.h | 46 +++++++++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 4 deletions(-)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index dfd54fb94e10..c053a32341bf 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -43,10 +43,12 @@ enum drm_sched_priority {
 };
 
 /**
- * A scheduler entity is a wrapper around a job queue or a group
- * of other entities. Entities take turns emitting jobs from their
- * job queues to corresponding hardware ring based on scheduling
- * policy.
+ * drm_sched_entity - A wrapper around a job queue (typically attached
+ * to the DRM file_priv).
+ *
+ * Entities will emit jobs in order to their corresponding hardware
+ * ring, and the scheduler will alternate between entities based on
+ * scheduling policy.
  */
 struct drm_sched_entity {
 	struct list_head		list;
@@ -78,7 +80,18 @@ struct drm_sched_rq {
 
 struct drm_sched_fence {
 	struct dma_fence		scheduled;
+
+	/* This fence is what will be signaled by the scheduler when
+	 * the job is completed.
+	 *
+	 * When setting up an out fence for the job, you should use
+	 * this, since it's available immediately upon
+	 * drm_sched_job_init(), and the fence returned by the driver
+	 * from run_job() won't be created until the dependencies have
+	 * resolved.
+	 */
 	struct dma_fence		finished;
+
 	struct dma_fence_cb		cb;
 	struct dma_fence		*parent;
 	struct drm_gpu_scheduler	*sched;
@@ -88,6 +101,13 @@ struct drm_sched_fence {
 
 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
 
+/**
+ * drm_sched_job - A job to be run by an entity.
+ *
+ * A job is created by the driver using drm_sched_job_init(), and the
+ * driver should call drm_sched_entity_push_job() once it wants the
+ * scheduler to schedule the job.
+ */
 struct drm_sched_job {
 	struct spsc_node		queue_node;
 	struct drm_gpu_scheduler	*sched;
@@ -112,10 +132,28 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
  * these functions should be implemented in driver side
  */
 struct drm_sched_backend_ops {
+	/* Called when the scheduler is considering scheduling this
+	 * job next, to get another struct dma_fence for this job to
+	 * block on. Once it returns NULL, run_job() may be called.
+	 */
 	struct dma_fence *(*dependency)(struct drm_sched_job *sched_job,
 					struct drm_sched_entity *s_entity);
+
+	/* Called to execute the job once all of the dependencies have
+	 * been resolved. This may be called multiple times, if
+	 * timedout_job() has happened and drm_sched_job_recovery()
+	 * decides to try it again.
+	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
+
+	/* Called when a job has taken too long to execute, to trigger
+	 * GPU recovery.
+	 */
 	void (*timedout_job)(struct drm_sched_job *sched_job);
+
+	/* Called once the job's finished fence has been signaled and
+	 * it's time to clean it up.
+	 */
 	void (*free_job)(struct drm_sched_job *sched_job);
 };
 
-- 
2.17.0
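
For readers coming to this API cold, here is a minimal driver-side
sketch of how the pieces documented above fit together. It is
illustrative only and not part of the patch: the "my_*" names and
my_hw_submit() are hypothetical, while drm_sched_job_init(),
drm_sched_entity_push_job(), struct drm_sched_backend_ops, and the
s_fence->finished fence are the scheduler API as described in the
comments above.

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>
#include <linux/slab.h>

struct my_job {
	struct drm_sched_job base;
	struct dma_fence *in_fence;	/* single dependency, if any */
};

/* Driver-specific submission to the hardware ring (hypothetical,
 * defined elsewhere in the driver). */
struct dma_fence *my_hw_submit(struct my_job *job);

static struct dma_fence *my_dependency(struct drm_sched_job *sched_job,
				       struct drm_sched_entity *s_entity)
{
	struct my_job *job = container_of(sched_job, struct my_job, base);
	struct dma_fence *fence = job->in_fence;

	/* Hand the scheduler one fence at a time to block on; once we
	 * return NULL, run_job() may be called. */
	job->in_fence = NULL;
	return fence;
}

static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = container_of(sched_job, struct my_job, base);

	/* Dependencies have resolved: kick the job off on the ring and
	 * return the hardware fence that signals on completion.  May
	 * run again if timedout_job() triggered recovery. */
	return my_hw_submit(job);
}

static void my_timedout_job(struct drm_sched_job *sched_job)
{
	/* The job ran too long: reset the GPU here, then optionally
	 * retry unfinished jobs via drm_sched_job_recovery(). */
}

static void my_free_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = container_of(sched_job, struct my_job, base);

	/* The finished fence has signaled; release our bookkeeping. */
	kfree(job);
}

static const struct drm_sched_backend_ops my_sched_ops = {
	.dependency	= my_dependency,
	.run_job	= my_run_job,
	.timedout_job	= my_timedout_job,
	.free_job	= my_free_job,
};

static int my_submit(struct drm_gpu_scheduler *sched,
		     struct drm_sched_entity *entity,
		     struct my_job *job, void *owner,
		     struct dma_fence **out_fence)
{
	int ret;

	ret = drm_sched_job_init(&job->base, sched, entity, owner);
	if (ret)
		return ret;

	/* The out fence is available as soon as drm_sched_job_init()
	 * returns, unlike the fence run_job() will later return. */
	*out_fence = dma_fence_get(&job->base.s_fence->finished);

	drm_sched_entity_push_job(&job->base, entity);
	return 0;
}

The point the finished-fence comment makes is visible in my_submit():
the out fence can be handed to userspace immediately after
drm_sched_job_init() succeeds, long before run_job() has produced a
hardware fence.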