From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chris Wilson,
    Stuart Abercrombie, Daniel Vetter
Subject: [ 23/71] drm/i915: fix hpd work vs. flush_work in the pageflip code deadlock
Date: Sun, 29 Sep 2013 12:27:35 -0700
Message-Id: <20130929192645.106139871@linuxfoundation.org>
In-Reply-To: <20130929192643.539596256@linuxfoundation.org>
References: <20130929192643.539596256@linuxfoundation.org>

3.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Daniel Vetter

commit 645416f5adc87c8fae44289cdba7562f3ade8f5c upstream.

Historically we've run our own driver hotplug handling in our own
workqueue, which then launched the drm core hotplug handling in the
system workqueue.  This is important since we flush our own driver
workqueue in the pageflip code while holding modeset locks, and only
the drm core hotplug code grabbed these locks.  But with

commit 69787f7da6b2adc4054357a661aaa1701a9ca76f
Author: Daniel Vetter
Date:   Tue Oct 23 18:23:34 2012 +0000

    drm: run the hpd irq event code directly

this was changed, and now we can deadlock in our flip handler if a
hotplug work is blocking the progress of the crucial unpin works.  So
this broke the careful deadlock avoidance implemented in

commit b4a98e57fc27854b5938fc8b08b68e5e68b91e1f
Author: Chris Wilson
Date:   Thu Nov 1 09:26:26 2012 +0000

    drm/i915: Flush outstanding unpin tasks before pageflipping

Since the rule thus far has been that work items on our own workqueue
may never grab modeset locks, simply restore that rule again.

v2: Add a comment to the declaration of dev_priv->wq to warn readers
about the tricky implications of using it.  Suggested by Chris Wilson.

Cc: Chris Wilson
Cc: Stuart Abercrombie
Reported-by: Stuart Abercrombie
References: http://permalink.gmane.org/gmane.comp.freedesktop.xorg.drivers.intel/26239
Reviewed-by: Chris Wilson
[danvet: Squash in a comment at the place where we schedule the work.
Requested after-the-fact by Chris on irc since the hpd work isn't the
only place we botch this.]
Signed-off-by: Daniel Vetter
Signed-off-by: Greg Kroah-Hartman
---
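For reference, the deadlock described above reduces to the classic
flush-under-lock pattern.  Below is a minimal sketch of the pre-fix
situation; every name in it (modeset_lock, driver_wq, hotplug_fn,
pageflip, demo_init) is a hypothetical stand-in for the drm modeset
locks, dev_priv->wq, the hotplug work, and the i915 flip path.  It is
not i915 code:

#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(modeset_lock);          /* stand-in for the drm modeset locks */
static struct workqueue_struct *driver_wq;  /* stand-in for dev_priv->wq */
static struct work_struct hotplug_work;

static void hotplug_fn(struct work_struct *work)
{
	mutex_lock(&modeset_lock);   /* A: blocks, the flipper holds the lock */
	/* ... hotplug processing that needs the modeset state ... */
	mutex_unlock(&modeset_lock);
}

static void pageflip(void)
{
	mutex_lock(&modeset_lock);
	queue_work(driver_wq, &hotplug_work);  /* pre-fix: hpd work on our wq */
	/*
	 * B: waits for *every* item on driver_wq, including hotplug_fn(),
	 * which is blocked on modeset_lock.  A waits for B, B waits for A.
	 */
	flush_workqueue(driver_wq);
	mutex_unlock(&modeset_lock);
}

static int __init demo_init(void)
{
	driver_wq = alloc_workqueue("demo_wq", 0, 1);
	if (!driver_wq)
		return -ENOMEM;
	INIT_WORK(&hotplug_work, hotplug_fn);
	pageflip();  /* never returns: the deadlock described above */
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");

In the real driver the flip path flushes pending unpin works rather
than the hotplug work by name, but flush_workqueue() waits on
everything queued, so one stuck hotplug item is enough.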
 drivers/gpu/drm/i915/i915_drv.h |    7 +++++++
 drivers/gpu/drm/i915/i915_irq.c |    9 +++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)

--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1091,6 +1091,13 @@ typedef struct drm_i915_private {

 	unsigned int fsb_freq, mem_freq, is_ddr3;

+	/**
+	 * wq - Driver workqueue for GEM.
+	 *
+	 * NOTE: Work items scheduled here are not allowed to grab any modeset
+	 * locks, for otherwise the flushing done in the pageflip code will
+	 * result in deadlocks.
+	 */
 	struct workqueue_struct *wq;

 	/* Display functions */
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -910,8 +910,13 @@ static inline void intel_hpd_irq_handler
 		dev_priv->display.hpd_irq_setup(dev);
 	spin_unlock(&dev_priv->irq_lock);

-	queue_work(dev_priv->wq,
-		   &dev_priv->hotplug_work);
+	/*
+	 * Our hotplug handler can grab modeset locks (by calling down into the
+	 * fb helpers). Hence it must not be run on our own dev-priv->wq work
+	 * queue for otherwise the flush_work in the pageflip code will
+	 * deadlock.
+	 */
+	schedule_work(&dev_priv->hotplug_work);
 }

 static void gmbus_irq_handler(struct drm_device *dev)
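Reduced to the same hypothetical sketch as above, the fix in the second
hunk is a one-line change: queue the lock-taking work on the system
workqueue, so that flushing driver_wq never waits on it.  Continuing
the declarations from the earlier sketch:

static void pageflip_fixed(void)
{
	mutex_lock(&modeset_lock);
	/* driver_wq now carries only work that never takes modeset_lock */
	flush_workqueue(driver_wq);
	mutex_unlock(&modeset_lock);
}

static int __init demo_fixed_init(void)
{
	driver_wq = alloc_workqueue("demo_wq", 0, 1);
	if (!driver_wq)
		return -ENOMEM;
	INIT_WORK(&hotplug_work, hotplug_fn);
	schedule_work(&hotplug_work);  /* system wq: may grab modeset_lock */
	pageflip_fixed();              /* returns; hotplug_fn runs once the lock drops */
	return 0;
}

This is exactly the rule the new dev_priv->wq comment in the first hunk
documents: work that may grab modeset locks must not live on the
driver's own workqueue.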