From: Eric Anholt
To: dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, david.emett@broadcom.com, thomas.spurden@broadcom.com, Rob Herring, Qiang Yu, Eric Anholt
Subject: [PATCH 5/7] drm: Add helpers for setting up an array of dma_fence dependencies.
Date: Mon, 1 Apr 2019 15:26:33 -0700
Message-Id: <20190401222635.25013-6-eric@anholt.net>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190401222635.25013-1-eric@anholt.net>
References: <20190401222635.25013-1-eric@anholt.net>

I needed to add implicit dependency support for v3d, Rob Herring has
been working on it for panfrost, and I had recently looked at the lima
implementation, so I think this will be a good intersection of what we
all want and will simplify our scheduler implementations.

Signed-off-by: Eric Anholt
---
 drivers/gpu/drm/drm_gem.c | 94 +++++++++++++++++++++++++++++++++++++++
 include/drm/drm_gem.h     |  5 +++
 2 files changed, 99 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 388b3742e562..7b3c73135e49 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1311,3 +1311,97 @@ drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
 	ww_acquire_fini(acquire_ctx);
 }
 EXPORT_SYMBOL(drm_gem_unlock_reservations);
+
+/**
+ * drm_gem_fence_array_add - Adds the fence to an array of fences to be
+ * waited on, deduplicating fences from the same context.
+ *
+ * @fence_array: array of dma_fence * for the job to block on.
+ * @fence: the dma_fence to add to the list of dependencies.
+ *
+ * Returns:
+ * 0 on success, or an error on failing to expand the array.
+ */
+int drm_gem_fence_array_add(struct xarray *fence_array,
+			    struct dma_fence *fence)
+{
+	struct dma_fence *entry;
+	unsigned long index;
+	u32 id = 0;
+	int ret;
+
+	if (!fence)
+		return 0;
+
+	/* Deduplicate if we already depend on a fence from the same context.
+	 * This lets the size of the array of deps scale with the number of
+	 * engines involved, rather than the number of BOs.
+	 */
+	xa_for_each(fence_array, index, entry) {
+		if (entry->context != fence->context)
+			continue;
+
+		if (dma_fence_is_later(fence, entry)) {
+			dma_fence_put(entry);
+			xa_store(fence_array, index, fence, GFP_KERNEL);
+		} else {
+			dma_fence_put(fence);
+		}
+		return 0;
+	}
+
+	ret = xa_alloc(fence_array, &id, UINT_MAX, fence, GFP_KERNEL);
+	if (ret != 0)
+		dma_fence_put(fence);
+
+	return ret;
+}
+EXPORT_SYMBOL(drm_gem_fence_array_add);
+
+/**
+ * drm_gem_fence_array_add_implicit - Adds the implicit dependencies tracked
+ * in the GEM object's reservation object to an array of dma_fences for use in
+ * scheduling a rendering job.
+ *
+ * This should be called after drm_gem_lock_reservations() on your array of
+ * GEM objects used in the job but before updating the reservations with your
+ * own fences.
+ *
+ * @fence_array: array of dma_fence * for the job to block on.
+ * @obj: the gem object to add new dependencies from.
+ * @write: whether the job might write the object (so we need to depend on
+ * shared fences in the reservation object).
+ */
+int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
+				     struct drm_gem_object *obj,
+				     bool write)
+{
+	int ret;
+	struct dma_fence **fences;
+	unsigned i, fence_count;
+
+	if (!write) {
+		struct dma_fence *fence =
+			reservation_object_get_excl_rcu(obj->resv);
+
+		return drm_gem_fence_array_add(fence_array, fence);
+	}
+
+	ret = reservation_object_get_fences_rcu(obj->resv, NULL,
+						&fence_count, &fences);
+	if (ret || !fence_count)
+		return ret;
+
+	for (i = 0; i < fence_count; i++) {
+		ret = drm_gem_fence_array_add(fence_array, fences[i]);
+		if (ret)
+			break;
+	}
+
+	for (; i < fence_count; i++)
+		dma_fence_put(fences[i]);
+	kfree(fences);
+	return ret;
+}
+EXPORT_SYMBOL(drm_gem_fence_array_add_implicit);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 2955aaab3dca..e957753d3f94 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -388,6 +388,11 @@ int drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
 			      struct ww_acquire_ctx *acquire_ctx);
 void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
 				 struct ww_acquire_ctx *acquire_ctx);
+int drm_gem_fence_array_add(struct xarray *fence_array,
+			    struct dma_fence *fence);
+int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
+				     struct drm_gem_object *obj,
+				     bool write);
 int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
 			    u32 handle, u64 *offset);
 int drm_gem_dumb_destroy(struct drm_file *file,
-- 
2.20.1