From: Lyude Paul
To: nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Ville Syrjälä, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    David Airlie, Daniel Vetter, linux-kernel@vger.kernel.org
Subject: [PATCH 1/9] drm/vblank: Add vblank works
Date: Tue, 17 Mar 2020 20:40:58 -0400
Message-Id: <20200318004159.235623-2-lyude@redhat.com>
In-Reply-To: <20200318004159.235623-1-lyude@redhat.com>
References: <20200318004159.235623-1-lyude@redhat.com>

From: Ville Syrjälä

Add some kind of vblank workers. The interface is similar to regular
delayed works, and also allows for re-scheduling.

Whatever hardware programming we do in the work must be fast (must at
least complete during the vblank, sometimes during the first few
scanlines of vblank), so we'll fire up a per-crtc high priority thread
for this.
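For illustration only (this sketch is not part of the patch): a minimal
driver-side use of the interface described above, with every foo_*
identifier being hypothetical.

#include <linux/kernel.h>
#include <drm/drm_crtc.h>
#include <drm/drm_vblank.h>

struct foo_crtc {
	struct drm_crtc base;
	struct drm_vblank_work update_work;
	u32 pending_val;
};

/* Hypothetical hardware poke, elided. */
static void foo_hw_write(struct foo_crtc *foo, u32 val);

/* Runs on the per-crtc high priority worker thread, so it must finish
 * quickly (ideally within the vblank period). */
static void foo_update_work(struct drm_vblank_work *work, u64 count)
{
	struct foo_crtc *foo = container_of(work, struct foo_crtc,
					    update_work);

	foo_hw_write(foo, foo->pending_val);
}

static void foo_crtc_setup(struct foo_crtc *foo)
{
	drm_vblank_work_init(&foo->update_work, &foo->base, foo_update_work);
}

static void foo_queue_update(struct foo_crtc *foo, u32 val)
{
	foo->pending_val = val;

	/* Run once the next vblank has passed; if that vblank was already
	 * missed, defer to the one after it (nextonmiss = true). */
	drm_vblank_work_schedule(&foo->update_work,
				 drm_crtc_vblank_count(&foo->base) + 1,
				 true);
}

static void foo_crtc_disable(struct foo_crtc *foo)
{
	/* Make sure nothing is still scheduled or running before
	 * shutting the CRTC down. */
	drm_vblank_work_cancel_sync(&foo->update_work);
}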
[based off patches from Ville Syrjälä, change below to signoff later]

Cc: Ville Syrjälä
Signed-off-by: Lyude Paul
---
 drivers/gpu/drm/drm_vblank.c | 322 +++++++++++++++++++++++++++++++++++
 include/drm/drm_vblank.h     |  34 ++++
 2 files changed, 356 insertions(+)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index da7b0b0c1090..06c796b6c381 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -25,7 +25,9 @@
  */
 
 #include
+#include
 #include
+#include
 
 #include
 #include
@@ -91,6 +93,7 @@ static bool
 drm_get_last_vbltimestamp(struct drm_device *dev, unsigned int pipe,
 			  ktime_t *tvblank, bool in_vblank_irq);
+static int drm_vblank_get(struct drm_device *dev, unsigned int pipe);
 
 static unsigned int drm_timestamp_precision = 20;  /* Default to 20 usecs. */
 
@@ -440,6 +443,9 @@ void drm_vblank_cleanup(struct drm_device *dev)
 			drm_core_check_feature(dev, DRIVER_MODESET));
 
 		del_timer_sync(&vblank->disable_timer);
+
+		wake_up_all(&vblank->vblank_work.work_wait);
+		kthread_stop(vblank->vblank_work.thread);
 	}
 
 	kfree(dev->vblank);
@@ -447,6 +453,108 @@ void drm_vblank_cleanup(struct drm_device *dev)
 	dev->num_crtcs = 0;
 }
 
+static int vblank_work_thread(void *data)
+{
+	struct drm_vblank_crtc *vblank = data;
+
+	while (!kthread_should_stop()) {
+		struct drm_vblank_work *work, *next;
+		LIST_HEAD(list);
+		u64 count;
+		int ret;
+
+		spin_lock_irq(&vblank->dev->event_lock);
+
+		ret = wait_event_interruptible_lock_irq(vblank->queue,
+			kthread_should_stop() ||
+			!list_empty(&vblank->vblank_work.work_list),
+			vblank->dev->event_lock);
+
+		WARN_ON(ret && !kthread_should_stop() &&
+			list_empty(&vblank->vblank_work.irq_list) &&
+			list_empty(&vblank->vblank_work.work_list));
+
+		list_for_each_entry_safe(work, next,
+					 &vblank->vblank_work.work_list,
+					 list) {
+			list_move_tail(&work->list, &list);
+			work->state = DRM_VBL_WORK_RUNNING;
+		}
+
+		spin_unlock_irq(&vblank->dev->event_lock);
+
+		if (list_empty(&list))
+			continue;
+
+		count = atomic64_read(&vblank->count);
+		list_for_each_entry(work, &list, list)
+			work->func(work, count);
+
+		spin_lock_irq(&vblank->dev->event_lock);
+
+		list_for_each_entry_safe(work, next, &list, list) {
+			if (work->reschedule) {
+				list_move_tail(&work->list,
+					       &vblank->vblank_work.irq_list);
+				drm_vblank_get(vblank->dev, vblank->pipe);
+				work->reschedule = false;
+				work->state = DRM_VBL_WORK_WAITING;
+			} else {
+				list_del_init(&work->list);
+				work->cancel = false;
+				work->state = DRM_VBL_WORK_IDLE;
+			}
+		}
+
+		spin_unlock_irq(&vblank->dev->event_lock);
+
+		wake_up_all(&vblank->vblank_work.work_wait);
+	}
+
+	return 0;
+}
+
+static void vblank_work_init(struct drm_vblank_crtc *vblank)
+{
+	struct sched_param param = {
+		.sched_priority = MAX_RT_PRIO - 1,
+	};
+	int ret;
+
+	INIT_LIST_HEAD(&vblank->vblank_work.irq_list);
+	INIT_LIST_HEAD(&vblank->vblank_work.work_list);
+	init_waitqueue_head(&vblank->vblank_work.work_wait);
+
+	vblank->vblank_work.thread =
+		kthread_run(vblank_work_thread, vblank, "card %d crtc %d",
+			    vblank->dev->primary->index, vblank->pipe);
+
+	ret = sched_setscheduler(vblank->vblank_work.thread,
+				 SCHED_FIFO, &param);
+	WARN_ON(ret);
+}
+
+/**
+ * drm_vblank_work_init - initialize a vblank work item
+ * @work: vblank work item
+ * @crtc: CRTC whose vblank will trigger the work execution
+ * @func: work function to be executed
+ *
+ * Initialize a vblank work item for a specific crtc.
+ */
+void drm_vblank_work_init(struct drm_vblank_work *work, struct drm_crtc *crtc,
+			  void (*func)(struct drm_vblank_work *work, u64 count))
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_vblank_crtc *vblank = &dev->vblank[drm_crtc_index(crtc)];
+
+	work->vblank = vblank;
+	work->state = DRM_VBL_WORK_IDLE;
+	work->func = func;
+	INIT_LIST_HEAD(&work->list);
+}
+EXPORT_SYMBOL(drm_vblank_work_init);
+
 /**
  * drm_vblank_init - initialize vblank support
  * @dev: DRM device
@@ -481,6 +589,8 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
 		init_waitqueue_head(&vblank->queue);
 		timer_setup(&vblank->disable_timer, vblank_disable_fn, 0);
 		seqlock_init(&vblank->seqlock);
+
+		vblank_work_init(vblank);
 	}
 
 	DRM_INFO("Supports vblank timestamp caching Rev 2 (21.10.2013).\n");
@@ -1825,6 +1935,22 @@ static void drm_handle_vblank_events(struct drm_device *dev, unsigned int pipe)
 	trace_drm_vblank_event(pipe, seq, now, high_prec);
 }
 
+static void drm_handle_vblank_works(struct drm_vblank_crtc *vblank)
+{
+	struct drm_vblank_work *work, *next;
+	u64 count = atomic64_read(&vblank->count);
+
+	list_for_each_entry_safe(work, next, &vblank->vblank_work.irq_list,
+				 list) {
+		if (!vblank_passed(count, work->count))
+			continue;
+
+		drm_vblank_put(vblank->dev, vblank->pipe);
+		list_move_tail(&work->list, &vblank->vblank_work.work_list);
+		work->state = DRM_VBL_WORK_SCHEDULED;
+	}
+}
+
 /**
  * drm_handle_vblank - handle a vblank event
  * @dev: DRM device
@@ -1866,6 +1992,7 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
 
 	spin_unlock(&dev->vblank_time_lock);
 
+	drm_handle_vblank_works(vblank);
 	wake_up(&vblank->queue);
 
 	/* With instant-off, we defer disabling the interrupt until after
@@ -2076,3 +2203,198 @@ int drm_crtc_queue_sequence_ioctl(struct drm_device *dev, void *data,
 	kfree(e);
 	return ret;
 }
+
+/**
+ * drm_vblank_work_schedule - schedule a vblank work
+ * @work: vblank work to schedule
+ * @count: target vblank count
+ * @nextonmiss: defer until the next vblank if target vblank was missed
+ *
+ * Schedule @work for execution once the crtc vblank count reaches @count.
+ *
+ * If the crtc vblank count has already reached @count and @nextonmiss is
+ * %false the work starts to execute immediately.
+ *
+ * If the crtc vblank count has already reached @count and @nextonmiss is
+ * %true the work is deferred until the next vblank (as if @count has been
+ * specified as crtc vblank count + 1).
+ *
+ * If @work is already scheduled, this function will reschedule said work
+ * using the new @count.
+ *
+ * Returns:
+ * 0 on success, error code on failure.
+ */
+int drm_vblank_work_schedule(struct drm_vblank_work *work,
+			     u64 count, bool nextonmiss)
+{
+	struct drm_vblank_crtc *vblank = work->vblank;
+	unsigned long irqflags;
+	u64 cur_vbl;
+	int ret = 0;
+	bool rescheduling = false;
+	bool passed;
+
+	spin_lock_irqsave(&vblank->dev->event_lock, irqflags);
+
+	if (work->cancel)
+		goto out;
+
+	if (work->state == DRM_VBL_WORK_RUNNING) {
+		work->reschedule = true;
+		work->count = count;
+		goto out;
+	} else if (work->state != DRM_VBL_WORK_IDLE) {
+		if (work->count == count)
+			goto out;
+		rescheduling = true;
+	}
+
+	if (work->state != DRM_VBL_WORK_WAITING) {
+		ret = drm_vblank_get(vblank->dev, vblank->pipe);
+		if (ret)
+			goto out;
+	}
+
+	work->count = count;
+
+	cur_vbl = atomic64_read(&vblank->count);
+	passed = vblank_passed(cur_vbl, count);
+	if (passed)
+		DRM_ERROR("crtc %d vblank %llu already passed (current %llu)\n",
+			  vblank->pipe, count, cur_vbl);
+
+	if (!nextonmiss && passed) {
+		drm_vblank_put(vblank->dev, vblank->pipe);
+		if (rescheduling)
+			list_move_tail(&work->list,
+				       &vblank->vblank_work.work_list);
+		else
+			list_add_tail(&work->list,
+				      &vblank->vblank_work.work_list);
+		work->state = DRM_VBL_WORK_SCHEDULED;
+		wake_up_all(&vblank->queue);
+	} else {
+		if (rescheduling)
+			list_move_tail(&work->list,
+				       &vblank->vblank_work.irq_list);
+		else
+			list_add_tail(&work->list,
+				      &vblank->vblank_work.irq_list);
+		work->state = DRM_VBL_WORK_WAITING;
+	}
+
+ out:
+	spin_unlock_irqrestore(&vblank->dev->event_lock, irqflags);
+
+	return ret;
+}
+EXPORT_SYMBOL(drm_vblank_work_schedule);
+
+static bool vblank_work_cancel(struct drm_vblank_work *work)
+{
+	struct drm_vblank_crtc *vblank = work->vblank;
+
+	switch (work->state) {
+	case DRM_VBL_WORK_RUNNING:
+		work->cancel = true;
+		work->reschedule = false;
+		/* fall through */
+	default:
+	case DRM_VBL_WORK_IDLE:
+		return false;
+	case DRM_VBL_WORK_WAITING:
+		drm_vblank_put(vblank->dev, vblank->pipe);
+		/* fall through */
+	case DRM_VBL_WORK_SCHEDULED:
+		list_del_init(&work->list);
+		work->state = DRM_VBL_WORK_IDLE;
+		return true;
+	}
+}
+
+/**
+ * drm_vblank_work_cancel - cancel a vblank work
+ * @work: vblank work to cancel
+ *
+ * Cancel an already scheduled vblank work.
+ *
+ * On return @work may still be executing, unless the return
+ * value is %true.
+ *
+ * Returns:
+ * True if the work was cancelled before it started to execute, false otherwise.
+ */
+bool drm_vblank_work_cancel(struct drm_vblank_work *work)
+{
+	struct drm_vblank_crtc *vblank = work->vblank;
+	bool cancelled;
+
+	spin_lock_irq(&vblank->dev->event_lock);
+
+	cancelled = vblank_work_cancel(work);
+
+	spin_unlock_irq(&vblank->dev->event_lock);
+
+	return cancelled;
+}
+EXPORT_SYMBOL(drm_vblank_work_cancel);
+
+/**
+ * drm_vblank_work_cancel_sync - cancel a vblank work and wait for it to finish executing
+ * @work: vblank work to cancel
+ *
+ * Cancel an already scheduled vblank work and wait for its
+ * execution to finish.
+ *
+ * On return @work is guaranteed to no longer be executing.
+ *
+ * Returns:
+ * True if the work was cancelled before it started to execute, false otherwise.
+ */
+bool drm_vblank_work_cancel_sync(struct drm_vblank_work *work)
+{
+	struct drm_vblank_crtc *vblank = work->vblank;
+	bool cancelled;
+	long ret;
+
+	spin_lock_irq(&vblank->dev->event_lock);
+
+	cancelled = vblank_work_cancel(work);
+
+	ret = wait_event_lock_irq_timeout(vblank->vblank_work.work_wait,
+					  work->state == DRM_VBL_WORK_IDLE,
+					  vblank->dev->event_lock,
+					  10 * HZ);
+
+	spin_unlock_irq(&vblank->dev->event_lock);
+
+	WARN(!ret, "crtc %d vblank work timed out\n", vblank->pipe);
+
+	return cancelled;
+}
+EXPORT_SYMBOL(drm_vblank_work_cancel_sync);
+
+/**
+ * drm_vblank_work_flush - wait for a scheduled vblank work to finish executing
+ * @work: vblank work to flush
+ *
+ * Wait until @work has finished executing.
+ */
+void drm_vblank_work_flush(struct drm_vblank_work *work)
+{
+	struct drm_vblank_crtc *vblank = work->vblank;
+	long ret;
+
+	spin_lock_irq(&vblank->dev->event_lock);
+
+	ret = wait_event_lock_irq_timeout(vblank->vblank_work.work_wait,
+					  work->state == DRM_VBL_WORK_IDLE,
+					  vblank->dev->event_lock,
+					  10 * HZ);
+
+	spin_unlock_irq(&vblank->dev->event_lock);
+
+	WARN(!ret, "crtc %d vblank work timed out\n", vblank->pipe);
+}
+EXPORT_SYMBOL(drm_vblank_work_flush);
diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h
index dd9f5b9e56e4..ac9130f419af 100644
--- a/include/drm/drm_vblank.h
+++ b/include/drm/drm_vblank.h
@@ -203,8 +203,42 @@ struct drm_vblank_crtc {
 	 * disabling functions multiple times.
 	 */
 	bool enabled;
+
+	struct {
+		struct task_struct *thread;
+		struct list_head irq_list, work_list;
+		wait_queue_head_t work_wait;
+	} vblank_work;
+};
+
+struct drm_vblank_work {
+	u64 count;
+	struct drm_vblank_crtc *vblank;
+	void (*func)(struct drm_vblank_work *work, u64 count);
+	struct list_head list;
+	enum {
+		DRM_VBL_WORK_IDLE,
+		DRM_VBL_WORK_WAITING,
+		DRM_VBL_WORK_SCHEDULED,
+		DRM_VBL_WORK_RUNNING,
+	} state;
+	bool cancel : 1;
+	bool reschedule : 1;
 };
 
+int drm_vblank_work_schedule(struct drm_vblank_work *work,
+			     u64 count, bool nextonmiss);
+void drm_vblank_work_init(struct drm_vblank_work *work, struct drm_crtc *crtc,
+			  void (*func)(struct drm_vblank_work *work, u64 count));
+bool drm_vblank_work_cancel(struct drm_vblank_work *work);
+bool drm_vblank_work_cancel_sync(struct drm_vblank_work *work);
+void drm_vblank_work_flush(struct drm_vblank_work *work);
+
+static inline bool drm_vblank_work_pending(struct drm_vblank_work *work)
+{
+	return work->state != DRM_VBL_WORK_IDLE;
+}
+
 int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs);
 bool drm_dev_has_vblank(const struct drm_device *dev);
 u64 drm_crtc_vblank_count(struct drm_crtc *crtc);
-- 
2.24.1