From: Alex Deucher
Date: Thu, 5 Apr 2018 09:27:36 -0400
Subject: Re: [PATCH] drm/sched: Extend the documentation.
To: Daniel Vetter
Cc: Eric Anholt, Alex Deucher, Christian König, dri-devel, Linux Kernel Mailing List
References: <20180404223251.28449-1-eric@anholt.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 5, 2018 at 2:16 AM, Daniel Vetter wrote:
> On Thu, Apr 5, 2018 at 12:32 AM, Eric Anholt wrote:
>> These comments answer all the questions I had for myself when
>> implementing a driver using the GPU scheduler.
>>
>> Signed-off-by: Eric Anholt
>
> Pulling all these comments into the generated kerneldoc would be
> awesome, maybe as a new "GPU Scheduler" chapter at the end of
> drm-mm.rst? Would mean a bit of busywork to convert the existing raw
> comments into proper kerneldoc. Also has the benefit that 0day will
> complain when you forget to update the comment when editing the
> function prototype - kerneldoc which isn't included anywhere in .rst
> won't be checked automatically.

I was actually planning to do this myself, but Nayan wanted to do this
as prep work for his proposed GSoC project, so I was going to see how
far he got first.
Alex

> -Daniel
>
>> ---
>>  include/drm/gpu_scheduler.h | 46 +++++++++++++++++++++++++++++++++----
>>  1 file changed, 42 insertions(+), 4 deletions(-)
>>
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index dfd54fb94e10..c053a32341bf 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -43,10 +43,12 @@ enum drm_sched_priority {
>>  };
>>
>>  /**
>> - * A scheduler entity is a wrapper around a job queue or a group
>> - * of other entities. Entities take turns emitting jobs from their
>> - * job queues to corresponding hardware ring based on scheduling
>> - * policy.
>> + * drm_sched_entity - A wrapper around a job queue (typically attached
>> + * to the DRM file_priv).
>> + *
>> + * Entities will emit jobs in order to their corresponding hardware
>> + * ring, and the scheduler will alternate between entities based on
>> + * scheduling policy.
>>   */
>>  struct drm_sched_entity {
>>         struct list_head                list;
>> @@ -78,7 +80,18 @@ struct drm_sched_rq {
>>
>>  struct drm_sched_fence {
>>         struct dma_fence                scheduled;
>> +
>> +       /* This fence is what will be signaled by the scheduler when
>> +        * the job is completed.
>> +        *
>> +        * When setting up an out fence for the job, you should use
>> +        * this, since it's available immediately upon
>> +        * drm_sched_job_init(), and the fence returned by the driver
>> +        * from run_job() won't be created until the dependencies have
>> +        * resolved.
>> +        */
>>         struct dma_fence                finished;
>> +
>>         struct dma_fence_cb             cb;
>>         struct dma_fence                *parent;
>>         struct drm_gpu_scheduler        *sched;
>> @@ -88,6 +101,13 @@ struct drm_sched_fence {
>>
>>  struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
>>
>> +/**
>> + * drm_sched_job - A job to be run by an entity.
>> + *
>> + * A job is created by the driver using drm_sched_job_init(), and
>> + * should call drm_sched_entity_push_job() once it wants the scheduler
>> + * to schedule the job.
>> + */
>>  struct drm_sched_job {
>>         struct spsc_node                queue_node;
>>         struct drm_gpu_scheduler        *sched;
>> @@ -112,10 +132,28 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
>>   * these functions should be implemented in driver side
>>   */
>>  struct drm_sched_backend_ops {
>> +       /* Called when the scheduler is considering scheduling this
>> +        * job next, to get another struct dma_fence for this job to
>> +        * block on. Once it returns NULL, run_job() may be called.
>> +        */
>>         struct dma_fence *(*dependency)(struct drm_sched_job *sched_job,
>>                                         struct drm_sched_entity *s_entity);
>> +
>> +       /* Called to execute the job once all of the dependencies have
>> +        * been resolved. This may be called multiple times, if
>> +        * timedout_job() has happened and drm_sched_job_recovery()
>> +        * decides to try it again.
>> +        */
>>         struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>> +
>> +       /* Called when a job has taken too long to execute, to trigger
>> +        * GPU recovery.
>> +        */
>>         void (*timedout_job)(struct drm_sched_job *sched_job);
>> +
>> +       /* Called once the job's finished fence has been signaled and
>> +        * it's time to clean it up.
>> +        */
>>         void (*free_job)(struct drm_sched_job *sched_job);
>>  };
>>
>> --
>> 2.17.0
>>
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
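[Editor's illustration] The kerneldoc quoted above spells out the driver-side contract of struct drm_sched_backend_ops. As a sketch only, here is a compilable userspace mock of those four callbacks: the stub types (struct dma_fence, struct drm_sched_job, struct drm_sched_entity with placeholder fields) and the my_* names are hypothetical stand-ins, not the real kernel definitions from <drm/gpu_scheduler.h>.

```c
#include <assert.h>
#include <stddef.h>

/* HYPOTHETICAL stand-in types so this sketch compiles outside the kernel
 * tree; a real driver gets the genuine definitions from
 * <drm/gpu_scheduler.h> and <linux/dma-fence.h>. */
struct dma_fence { int signaled; };
struct drm_sched_entity { int unused; };
struct drm_sched_job {
        struct dma_fence *in_fence;   /* a dependency to wait on */
        struct dma_fence hw_fence;    /* what run_job() hands back */
};

/* Callback table shape, mirroring the kerneldoc in the quoted patch. */
struct drm_sched_backend_ops {
        struct dma_fence *(*dependency)(struct drm_sched_job *sched_job,
                                        struct drm_sched_entity *s_entity);
        struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
        void (*timedout_job)(struct drm_sched_job *sched_job);
        void (*free_job)(struct drm_sched_job *sched_job);
};

/* Hand the scheduler the next fence this job must still wait on;
 * returning NULL signals that run_job() may now be called. */
static struct dma_fence *my_dependency(struct drm_sched_job *sched_job,
                                       struct drm_sched_entity *s_entity)
{
        (void)s_entity;
        if (sched_job->in_fence && !sched_job->in_fence->signaled)
                return sched_job->in_fence;
        return NULL;
}

/* Submit to the (pretend) hardware ring and return the fence that will
 * signal on completion; per the patch, this may run again if
 * timedout_job() triggered recovery. */
static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
        sched_job->hw_fence.signaled = 1;  /* pretend hardware finished */
        return &sched_job->hw_fence;
}

/* Would trigger GPU reset/recovery in a real driver. */
static void my_timedout_job(struct drm_sched_job *sched_job)
{
        (void)sched_job;
}

/* Clean up once the finished fence has signaled. */
static void my_free_job(struct drm_sched_job *sched_job)
{
        (void)sched_job;
}

static const struct drm_sched_backend_ops my_ops = {
        .dependency   = my_dependency,
        .run_job      = my_run_job,
        .timedout_job = my_timedout_job,
        .free_job     = my_free_job,
};
```

The ordering the quoted comments guarantee is the point of the sketch: dependency() is polled until it returns NULL, only then does run_job() execute, and free_job() runs only after the finished fence has signaled.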