References: <20180404223251.28449-1-eric@anholt.net>
From: Daniel Vetter
Date: Thu, 5 Apr 2018 16:03:27 +0200
Subject: Re: [PATCH] drm/sched: Extend the documentation.
To: Alex Deucher
Cc: Nayan Deshmukh, Alex Deucher, Christian König, dri-devel, Linux Kernel Mailing List

On Thu, Apr 5, 2018 at 3:44 PM, Alex Deucher wrote:
> On Thu, Apr 5, 2018 at 9:41 AM, Nayan Deshmukh wrote:
>> On Thu, Apr 5, 2018 at 6:59 PM, Daniel Vetter wrote:
>>> On Thu, Apr 5, 2018 at 3:27 PM, Alex Deucher wrote:
>>>> On Thu, Apr 5, 2018 at 2:16 AM, Daniel Vetter wrote:
>>>>> On Thu, Apr 5, 2018 at 12:32 AM, Eric Anholt wrote:
>>>>>> These comments answer all the questions I had for myself when
>>>>>> implementing a driver using the GPU scheduler.
>>>>>>
>>>>>> Signed-off-by: Eric Anholt
>>>>>
>>>>> Pulling all these comments into the generated kerneldoc would be
>>>>> awesome, maybe as a new "GPU Scheduler" chapter at the end of
>>>>> drm-mm.rst? Would mean a bit of busywork to convert the existing raw
>>>>> comments into proper kerneldoc. Also has the benefit that 0day will
>>>>> complain when you forget to update the comment when editing the
>>>>> function prototype - kerneldoc which isn't included anywhere in a
>>>>> .rst won't be checked automatically.
>>>>
>>>> I was actually planning to do this myself, but Nayan wanted to do this
>>>> as prep work for his proposed GSoC project, so I was going to see how
>>>> far he got first.
>>
>> It is still on my TODO list. I just got a bit busy with my coursework. I
>> will try to look at it during the weekend.
>
> No worries. Take your time.
>
>>>
>>> Awesome. I'm also happy to help out with any kerneldoc questions and
>>> best practices. Technically ofc no clue about the scheduler :-)
>>>
>> I was thinking of adding a different rst for the scheduler altogether.
>> Would it be better to add it to drm-mm.rst itself?
>
> I had been planning to add a separate file too, since it's a separate
> entity. Do whatever you think works best.

My recommendation is to put the gist of the docs into the source-code
comments - that way there's a much higher chance people will spot them.
In the docs you'll then only have the chapter structure (and I'm not
sure the scheduler needs more than one chapter).

drm-mm.rst is a bit misnamed, since it contains all the stuff for
handling rendering: MM, fences, dma-buf/prime, drm_syncobjs. I think the
scheduler fits very well in that topic range. We can ofc rename it to
drm-rendering.rst or similar if the -mm.rst name is misleading (plus
adjust the title).
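To illustrate the kind of conversion busywork involved - a rough sketch
only, not the final wording - the raw comments on drm_sched_backend_ops
from this patch would roughly become kerneldoc member descriptions along
these lines:

/**
 * struct drm_sched_backend_ops - driver-side hooks called by the scheduler
 *
 * @dependency: Called when the scheduler is considering scheduling this
 *	job next, to get another struct dma_fence for this job to block on.
 *	Once it returns NULL, run_job() may be called.
 * @run_job: Called to execute the job once all of the dependencies have
 *	been resolved. May be called multiple times if timedout_job() has
 *	happened and drm_sched_job_recovery() decides to try it again.
 * @timedout_job: Called when a job has taken too long to execute, to
 *	trigger GPU recovery.
 * @free_job: Called once the job's finished fence has been signaled and
 *	it's time to clean it up.
 *
 * These functions should be implemented on the driver side.
 */

Plus a kernel-doc directive pulling include/drm/gpu_scheduler.h (e.g.
".. kernel-doc::" with :internal:) into whichever .rst chapter we end up
with, so the comments actually get built and checked.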
-Daniel

>
> Alex
>
>>
>>> Cheers, Daniel
>>>> Alex
>>>>
>>>>> -Daniel
>>>>>
>>>>>> ---
>>>>>>  include/drm/gpu_scheduler.h | 46 +++++++++++++++++++++++++++++++++----
>>>>>>  1 file changed, 42 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>>>> index dfd54fb94e10..c053a32341bf 100644
>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>> @@ -43,10 +43,12 @@ enum drm_sched_priority {
>>>>>>  };
>>>>>>
>>>>>>  /**
>>>>>> - * A scheduler entity is a wrapper around a job queue or a group
>>>>>> - * of other entities. Entities take turns emitting jobs from their
>>>>>> - * job queues to corresponding hardware ring based on scheduling
>>>>>> - * policy.
>>>>>> + * drm_sched_entity - A wrapper around a job queue (typically attached
>>>>>> + * to the DRM file_priv).
>>>>>> + *
>>>>>> + * Entities will emit jobs in order to their corresponding hardware
>>>>>> + * ring, and the scheduler will alternate between entities based on
>>>>>> + * scheduling policy.
>>>>>>   */
>>>>>>  struct drm_sched_entity {
>>>>>>         struct list_head list;
>>>>>> @@ -78,7 +80,18 @@ struct drm_sched_rq {
>>>>>>
>>>>>>  struct drm_sched_fence {
>>>>>>         struct dma_fence scheduled;
>>>>>> +
>>>>>> +       /* This fence is what will be signaled by the scheduler when
>>>>>> +        * the job is completed.
>>>>>> +        *
>>>>>> +        * When setting up an out fence for the job, you should use
>>>>>> +        * this, since it's available immediately upon
>>>>>> +        * drm_sched_job_init(), and the fence returned by the driver
>>>>>> +        * from run_job() won't be created until the dependencies have
>>>>>> +        * resolved.
>>>>>> +        */
>>>>>>         struct dma_fence finished;
>>>>>> +
>>>>>>         struct dma_fence_cb cb;
>>>>>>         struct dma_fence *parent;
>>>>>>         struct drm_gpu_scheduler *sched;
>>>>>> @@ -88,6 +101,13 @@ struct drm_sched_fence {
>>>>>>
>>>>>>  struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
>>>>>>
>>>>>> +/**
>>>>>> + * drm_sched_job - A job to be run by an entity.
>>>>>> + *
>>>>>> + * A job is created by the driver using drm_sched_job_init(), and
>>>>>> + * should call drm_sched_entity_push_job() once it wants the scheduler
>>>>>> + * to schedule the job.
>>>>>> + */
>>>>>>  struct drm_sched_job {
>>>>>>         struct spsc_node queue_node;
>>>>>>         struct drm_gpu_scheduler *sched;
>>>>>> @@ -112,10 +132,28 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
>>>>>>   * these functions should be implemented in driver side
>>>>>>   */
>>>>>>  struct drm_sched_backend_ops {
>>>>>> +       /* Called when the scheduler is considering scheduling this
>>>>>> +        * job next, to get another struct dma_fence for this job to
>>>>>> +        * block on. Once it returns NULL, run_job() may be called.
>>>>>> +        */
>>>>>>         struct dma_fence *(*dependency)(struct drm_sched_job *sched_job,
>>>>>>                                         struct drm_sched_entity *s_entity);
>>>>>> +
>>>>>> +       /* Called to execute the job once all of the dependencies have
>>>>>> +        * been resolved. This may be called multiple times, if
>>>>>> +        * timedout_job() has happened and drm_sched_job_recovery()
>>>>>> +        * decides to try it again.
>>>>>> +        */
>>>>>>         struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>>>>>> +
>>>>>> +       /* Called when a job has taken too long to execute, to trigger
>>>>>> +        * GPU recovery.
>>>>>> +        */
>>>>>>         void (*timedout_job)(struct drm_sched_job *sched_job);
>>>>>> +
>>>>>> +       /* Called once the job's finished fence has been signaled and
>>>>>> +        * it's time to clean it up.
>>>>>> +        */
>>>>>>         void (*free_job)(struct drm_sched_job *sched_job);
>>>>>>  };
>>>>>>
>>>>>> --
>>>>>> 2.17.0

--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
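As a footnote to the hooks documented in the patch above: a minimal,
purely hypothetical driver-side sketch of wiring them up could look
roughly like the following (all my_* names are invented and the bodies
are stubs, not working driver code):

#include <linux/dma-fence.h>
#include <drm/gpu_scheduler.h>

static struct dma_fence *my_job_dependency(struct drm_sched_job *sched_job,
					   struct drm_sched_entity *s_entity)
{
	/* Return the next fence this job still needs to wait on, or NULL
	 * once all dependencies have resolved and run_job() may be called.
	 */
	return NULL;
}

static struct dma_fence *my_job_run(struct drm_sched_job *sched_job)
{
	/* Submit the job to the hardware ring here and return the fence
	 * that will signal when the hardware has finished it.
	 */
	return NULL;
}

static void my_job_timedout(struct drm_sched_job *sched_job)
{
	/* The job took too long; kick off GPU reset/recovery here. */
}

static void my_job_free(struct drm_sched_job *sched_job)
{
	/* The finished fence has signaled; release whatever driver
	 * structure embeds sched_job.
	 */
}

static const struct drm_sched_backend_ops my_sched_ops = {
	.dependency	= my_job_dependency,
	.run_job	= my_job_run,
	.timedout_job	= my_job_timedout,
	.free_job	= my_job_free,
};

In a real driver the ops table would be passed to drm_sched_init() when
setting up the scheduler, and jobs would be fed in via
drm_sched_job_init() followed by drm_sched_entity_push_job(), as the
comments in the patch describe.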