From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, Rob Clark, Rob Clark, Sean Paul,
	David Airlie, Daniel Vetter, Sumit Semwal, Christian König,
	linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
	linux-kernel@vger.kernel.org (open list),
	linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK),
	linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH 07/11] drm/msm: Track "seqno" fences by idr
Date: Sat, 17 Jul 2021 13:29:09 -0700
Message-Id: <20210717202924.987514-8-robdclark@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210717202924.987514-1-robdclark@gmail.com>
References: <20210717202924.987514-1-robdclark@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Rob Clark

Previously the (non-fd) fence returned from the submit ioctl was a raw
seqno, which is scoped to the ring. But from a UABI standpoint, the
ioctls related to seqno fences all specify a submitqueue. We can take
advantage of that to replace the seqno fences with a cyclic idr handle.

This is in preparation for moving to the drm scheduler, at which point
the submit ioctl will return after queuing the submit job to the
scheduler, but before the submit is written into the ring (and
therefore before a ring seqno has been assigned). That means we need
to replace the dma_fence that userspace may need to wait on with a
scheduler fence.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c         | 30 +++++++++++++++++++--
 drivers/gpu/drm/msm/msm_fence.c       | 39 ---------------------------
 drivers/gpu/drm/msm/msm_fence.h       |  2 --
 drivers/gpu/drm/msm/msm_gem.h         |  1 +
 drivers/gpu/drm/msm/msm_gem_submit.c  | 23 +++++++++++++++-
 drivers/gpu/drm/msm/msm_gpu.h         |  5 ++++
 drivers/gpu/drm/msm/msm_submitqueue.c |  5 ++++
 7 files changed, 61 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 9b8fa2ad0d84..1594ae39d54f 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -911,6 +911,7 @@ static int msm_ioctl_wait_fence(struct drm_device *dev, void *data,
 	ktime_t timeout = to_ktime(args->timeout);
 	struct msm_gpu_submitqueue *queue;
 	struct msm_gpu *gpu = priv->gpu;
+	struct dma_fence *fence;
 	int ret;

 	if (args->pad) {
@@ -925,10 +926,35 @@ static int msm_ioctl_wait_fence(struct drm_device *dev, void *data,
 	if (!queue)
 		return -ENOENT;

-	ret = msm_wait_fence(gpu->rb[queue->prio]->fctx, args->fence, &timeout,
-		true);
+	/*
+	 * Map submitqueue scoped "seqno" (which is actually an idr key)
+	 * back to underlying dma-fence
+	 *
+	 * The fence is removed from the fence_idr when the submit is
+	 * retired, so if the fence is not found it means there is nothing
+	 * to wait for
+	 */
+	ret = mutex_lock_interruptible(&queue->lock);
+	if (ret)
+		return ret;
+	fence = idr_find(&queue->fence_idr, args->fence);
+	if (fence)
+		fence = dma_fence_get_rcu(fence);
+	mutex_unlock(&queue->lock);
+
+	if (!fence)
+		return 0;
+	ret = dma_fence_wait_timeout(fence, true, timeout_to_jiffies(&timeout));
+	if (ret == 0) {
+		ret = -ETIMEDOUT;
+	} else if (ret != -ERESTARTSYS) {
+		ret = 0;
+	}
+
+	dma_fence_put(fence);

 	msm_submitqueue_put(queue);
+
 	return ret;
 }

diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
index e58895603726..efa86807e622 100644
--- a/drivers/gpu/drm/msm/msm_fence.c
+++ b/drivers/gpu/drm/msm/msm_fence.c
@@ -39,45 +39,6 @@ static inline bool fence_completed(struct msm_fence_context *fctx, uint32_t fenc
 	return (int32_t)(fctx->completed_fence - fence) >= 0;
 }

-/* legacy path for WAIT_FENCE ioctl: */
-int msm_wait_fence(struct msm_fence_context *fctx, uint32_t fence,
-		ktime_t *timeout, bool interruptible)
-{
-	int ret;
-
-	if (fence > fctx->last_fence) {
-		DRM_ERROR_RATELIMITED("%s: waiting on invalid fence: %u (of %u)\n",
-				fctx->name, fence, fctx->last_fence);
-		return -EINVAL;
-	}
-
-	if (!timeout) {
-		/* no-wait: */
-		ret = fence_completed(fctx, fence) ? 0 : -EBUSY;
-	} else {
-		unsigned long remaining_jiffies = timeout_to_jiffies(timeout);
-
-		if (interruptible)
-			ret = wait_event_interruptible_timeout(fctx->event,
-				fence_completed(fctx, fence),
-				remaining_jiffies);
-		else
-			ret = wait_event_timeout(fctx->event,
-				fence_completed(fctx, fence),
-				remaining_jiffies);
-
-		if (ret == 0) {
-			DBG("timeout waiting for fence: %u (completed: %u)",
-					fence, fctx->completed_fence);
-			ret = -ETIMEDOUT;
-		} else if (ret != -ERESTARTSYS) {
-			ret = 0;
-		}
-	}
-
-	return ret;
-}
-
 /* called from workqueue */
 void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence)
 {
diff --git a/drivers/gpu/drm/msm/msm_fence.h b/drivers/gpu/drm/msm/msm_fence.h
index 2d9af66dcca5..98d495ce2dee 100644
--- a/drivers/gpu/drm/msm/msm_fence.h
+++ b/drivers/gpu/drm/msm/msm_fence.h
@@ -24,8 +24,6 @@ struct msm_fence_context * msm_fence_context_alloc(struct drm_device *dev,
 		const char *name);
 void msm_fence_context_free(struct msm_fence_context *fctx);

-int msm_wait_fence(struct msm_fence_context *fctx, uint32_t fence,
-		ktime_t *timeout, bool interruptible);
 void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence);

 struct dma_fence * msm_fence_alloc(struct msm_fence_context *fctx);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index da3af702a6c8..e0579abda5b9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -320,6 +320,7 @@ struct msm_gem_submit {
 	struct ww_acquire_ctx ticket;
 	uint32_t seqno;		/* Sequence number of the submit on the ring */
 	struct dma_fence *fence;
+	int fence_id;       /* key into queue->fence_idr */
 	struct msm_gpu_submitqueue *queue;
 	struct pid *pid;    /* submitting process */
 	bool fault_dumped;  /* Limit devcoredump dumping to one per submit */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 4f02fa3c78f9..f6f595aae2c5 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -68,7 +68,14 @@ void __msm_gem_submit_destroy(struct kref *kref)
 			container_of(kref, struct msm_gem_submit, ref);
 	unsigned i;

+	if (submit->fence_id) {
+		mutex_lock(&submit->queue->lock);
+		idr_remove(&submit->queue->fence_idr, submit->fence_id);
+		mutex_unlock(&submit->queue->lock);
+	}
+
 	dma_fence_put(submit->fence);
+
 	put_pid(submit->pid);
 	msm_submitqueue_put(submit->queue);

@@ -872,6 +879,20 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		goto out;
 	}

+	/*
+	 * Allocate an id which can be used by WAIT_FENCE ioctl to map back
+	 * to the underlying fence.
+	 */
+	mutex_lock(&queue->lock);
+	submit->fence_id = idr_alloc_cyclic(&queue->fence_idr,
+			submit->fence, 0, INT_MAX, GFP_KERNEL);
+	mutex_unlock(&queue->lock);
+	if (submit->fence_id < 0) {
+		ret = submit->fence_id;
+		submit->fence_id = 0;
+		goto out;
+	}
+
 	if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
 		struct sync_file *sync_file = sync_file_create(submit->fence);
 		if (!sync_file) {
@@ -886,7 +907,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,

 	msm_gpu_submit(gpu, submit);

-	args->fence = submit->fence->seqno;
+	args->fence = submit->fence_id;

 	msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs);
 	msm_process_post_deps(post_deps, args->nr_out_syncobjs,
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index f3609eca5c8f..cc4e4a9ac1c2 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -221,6 +221,9 @@ struct msm_gpu_perfcntr {
  *      which set of pgtables do submits jobs associated with the
  *      submitqueue use)
  * @node:      node in the context's list of submitqueues
+ * @fence_idr: maps fence-id to dma_fence for userspace visible fence
+ *             seqno, protected by submitqueue lock
+ * @lock:      submitqueue lock
  * @ref:       reference count
  */
 struct msm_gpu_submitqueue {
@@ -230,6 +233,8 @@ struct msm_gpu_submitqueue {
 	int faults;
 	struct msm_file_private *ctx;
 	struct list_head node;
+	struct idr fence_idr;
+	struct mutex lock;
 	struct kref ref;
 };

diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index fbea6e7adf40..75cded54d571 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -12,6 +12,8 @@ void msm_submitqueue_destroy(struct kref *kref)
 	struct msm_gpu_submitqueue *queue = container_of(kref,
 		struct msm_gpu_submitqueue, ref);

+	idr_destroy(&queue->fence_idr);
+
 	msm_file_private_put(queue->ctx);

 	kfree(queue);
@@ -89,6 +91,9 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
 	if (id)
 		*id = queue->id;

+	idr_init(&queue->fence_idr);
+	mutex_init(&queue->lock);
+
 	list_add_tail(&queue->node, &ctx->submitqueues);

 	write_unlock(&ctx->queuelock);
-- 
2.31.1
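
For readers less familiar with the idr API, the scheme in the commit message
boils down to the pattern below. This is a minimal sketch, not code from the
patch: the demo_queue/demo_submit types and demo_* helpers are hypothetical
stand-ins for msm_gpu_submitqueue/msm_gem_submit, and only the idr, mutex and
dma-fence calls are the stock kernel APIs the patch relies on.

#include <linux/idr.h>
#include <linux/mutex.h>
#include <linux/dma-fence.h>

struct demo_queue {
	struct idr fence_idr;	/* fence-id -> dma_fence, userspace visible */
	struct mutex lock;	/* protects fence_idr */
};

struct demo_submit {
	struct demo_queue *queue;
	struct dma_fence *fence;
	int fence_id;		/* key into queue->fence_idr, 0 = unassigned */
};

static void demo_queue_init(struct demo_queue *queue)
{
	idr_init(&queue->fence_idr);
	mutex_init(&queue->lock);
}

/* At submit time: hand userspace a small cyclic id instead of a ring seqno. */
static int demo_assign_fence_id(struct demo_submit *submit)
{
	struct demo_queue *queue = submit->queue;
	int id;

	mutex_lock(&queue->lock);
	/*
	 * idr_alloc_cyclic() keeps advancing the next id, so recently freed
	 * ids are not immediately reused; starting at 1 lets 0 mean "no id
	 * assigned yet" (a sketch-level choice, not taken from the patch).
	 */
	id = idr_alloc_cyclic(&queue->fence_idr, submit->fence, 1, INT_MAX,
			      GFP_KERNEL);
	mutex_unlock(&queue->lock);

	if (id < 0)
		return id;

	submit->fence_id = id;
	return 0;
}

/* At retire time: drop the id so a later wait sees "nothing to wait for". */
static void demo_release_fence_id(struct demo_submit *submit)
{
	struct demo_queue *queue = submit->queue;

	if (!submit->fence_id)
		return;

	mutex_lock(&queue->lock);
	idr_remove(&queue->fence_idr, submit->fence_id);
	mutex_unlock(&queue->lock);
	submit->fence_id = 0;
}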
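
The lookup/wait side mirrors the WAIT_FENCE hunk: find the fence by id under
the queue lock, take a reference before dropping the lock, then wait outside
the lock. Again a sketch under the same assumptions (hypothetical demo_queue
type, timeout passed directly in jiffies rather than via the driver's
timeout_to_jiffies() helper).

/*
 * Wait on a submitqueue-scoped fence id. Returns 0 if the id is no longer in
 * the idr (the submit already retired, so there is nothing to wait for),
 * -ETIMEDOUT on timeout, or -ERESTARTSYS if the wait was interrupted.
 */
static int demo_wait_fence_id(struct demo_queue *queue, int fence_id,
			      unsigned long timeout_jiffies)
{
	struct dma_fence *fence;
	long ret;

	mutex_lock(&queue->lock);
	fence = idr_find(&queue->fence_idr, fence_id);
	if (fence)
		fence = dma_fence_get_rcu(fence);	/* NULL if already being freed */
	mutex_unlock(&queue->lock);

	if (!fence)
		return 0;

	ret = dma_fence_wait_timeout(fence, true, timeout_jiffies);
	if (ret == 0)
		ret = -ETIMEDOUT;	/* timed out before the fence signaled */
	else if (ret != -ERESTARTSYS)
		ret = 0;		/* fence signaled within the timeout */

	dma_fence_put(fence);

	return ret;
}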