From: Rob Clark <robdclark@gmail.com>
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Luben Tuikov, David Airlie, Daniel Vetter, Sumit Semwal,
 Christian König, linux-kernel@vger.kernel.org (open list),
 linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK),
 linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [RFC] drm/scheduler: Unwrap job dependencies
Date: Wed, 22 Mar 2023 15:44:03 -0700
Message-Id: <20230322224403.35742-1-robdclark@gmail.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Rob Clark

Container fences have burner contexts, which makes the trick to store at
most one fence per context somewhat useless if we don't unwrap array or
chain fences.

Signed-off-by: Rob Clark
---
tbh, I'm not sure why we weren't doing this already, unless there is
something I'm overlooking

 drivers/gpu/drm/scheduler/sched_main.c | 43 +++++++++++++++++---------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index c2ee44d6224b..f59e5335afbb 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -41,20 +41,21 @@
  * 4. Entities themselves maintain a queue of jobs that will be scheduled on
  *    the hardware.
  *
  * The jobs in a entity are always scheduled in the order that they were pushed.
  */

 #include <linux/kthread.h>
 #include <linux/wait.h>
 #include <linux/sched.h>
 #include <linux/completion.h>
+#include <linux/dma-fence-unwrap.h>
 #include <linux/dma-resv.h>
 #include <uapi/linux/sched/types.h>

 #include <drm/drm_print.h>
 #include <drm/drm_gem.h>
 #include <drm/gpu_scheduler.h>
 #include <drm/spsc_queue.h>

 #define CREATE_TRACE_POINTS
 #include "gpu_scheduler_trace.h"
@@ -665,41 +666,27 @@ void drm_sched_job_arm(struct drm_sched_job *job)
 	sched = entity->rq->sched;

 	job->sched = sched;
 	job->s_priority = entity->rq - sched->sched_rq;
 	job->id = atomic64_inc_return(&sched->job_id_count);

 	drm_sched_fence_init(job->s_fence, job->entity);
 }
 EXPORT_SYMBOL(drm_sched_job_arm);

-/**
- * drm_sched_job_add_dependency - adds the fence as a job dependency
- * @job: scheduler job to add the dependencies to
- * @fence: the dma_fence to add to the list of dependencies.
- *
- * Note that @fence is consumed in both the success and error cases.
- *
- * Returns:
- * 0 on success, or an error on failing to expand the array.
- */
-int drm_sched_job_add_dependency(struct drm_sched_job *job,
-				 struct dma_fence *fence)
+static int _add_dependency(struct drm_sched_job *job, struct dma_fence *fence)
 {
 	struct dma_fence *entry;
 	unsigned long index;
 	u32 id = 0;
 	int ret;

-	if (!fence)
-		return 0;
-
 	/* Deduplicate if we already depend on a fence from the same context.
 	 * This lets the size of the array of deps scale with the number of
 	 * engines involved, rather than the number of BOs.
 	 */
 	xa_for_each(&job->dependencies, index, entry) {
 		if (entry->context != fence->context)
 			continue;

 		if (dma_fence_is_later(fence, entry)) {
 			dma_fence_put(entry);
@@ -709,20 +696,46 @@ int drm_sched_job_add_dependency(struct drm_sched_job *job,
 		}
 		return 0;
 	}

 	ret = xa_alloc(&job->dependencies, &id, fence, xa_limit_32b,
 		       GFP_KERNEL);
 	if (ret != 0)
 		dma_fence_put(fence);

 	return ret;
 }
+
+/**
+ * drm_sched_job_add_dependency - adds the fence as a job dependency
+ * @job: scheduler job to add the dependencies to
+ * @fence: the dma_fence to add to the list of dependencies.
+ *
+ * Note that @fence is consumed in both the success and error cases.
+ *
+ * Returns:
+ * 0 on success, or an error on failing to expand the array.
+ */
+int drm_sched_job_add_dependency(struct drm_sched_job *job,
+				 struct dma_fence *fence)
+{
+	struct dma_fence_unwrap iter;
+	struct dma_fence *f;
+	int ret = 0;
+
+	dma_fence_unwrap_for_each (f, &iter, fence) {
+		ret = _add_dependency(job, f);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
 EXPORT_SYMBOL(drm_sched_job_add_dependency);

 /**
  * drm_sched_job_add_resv_dependencies - add all fences from the resv to the job
  * @job: scheduler job to add the dependencies to
  * @resv: the dma_resv object to get the fences from
  * @usage: the dma_resv_usage to use to filter the fences
  *
  * This adds all fences matching the given usage from @resv to @job.
  * Must be called with the @resv lock held.
-- 
2.39.2