Date: Thu, 20 Oct 2016 16:28:14 +0300
From: Ville Syrjälä
To: Takashi Iwai
Cc: dri-devel@lists.freedesktop.org, Daniel Vetter, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] drm/fb-helper: Fix race between deferred_io worker and dirty updater
Message-ID: <20161020132814.GT4329@intel.com>
In-Reply-To: <20161020132055.9646-1-tiwai@suse.de>

On Thu, Oct 20, 2016 at 03:20:55PM +0200, Takashi Iwai wrote:
> Since the 4.7 kernel, we've seen error messages like
> 
>   kernel: [TTM] Buffer eviction failed
>   kernel: qxl 0000:00:02.0: object_init failed for (4026540032, 0x00000001)
>   kernel: [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to allocate VRAM BO
> 
> on QXL when switching and accessing VTs. The culprit is the generic
> deferred_io code (the qxl driver switched to it in 4.7): there is a
> race between the dirty clip update and the invocation of the worker
> callback.
> 
> In drm_fb_helper_dirty(), the dirty clip is updated inside the spinlock,
> but the update worker is kicked off outside the spinlock. Meanwhile the
> update worker also clears the dirty clip inside the spinlock.
> Thus, when drm_fb_helper_dirty() is called concurrently, schedule_work()
> is called after the clip has been cleared by the first worker invocation.

Why does that matter? The first worker should have done all the
necessary work already, no?

> 
> The fix is simply moving schedule_work() inside the spinlock.
> 
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98322
> Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=1003298
> Fixes: eaa434defaca ('drm/fb-helper: Add fb_deferred_io support')
> Signed-off-by: Takashi Iwai
> ---
>  drivers/gpu/drm/drm_fb_helper.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 03414bde1f15..bae392dea2cc 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -861,9 +861,8 @@ static void drm_fb_helper_dirty(struct fb_info *info, u32 x, u32 y,
>  	clip->y1 = min_t(u32, clip->y1, y);
>  	clip->x2 = max_t(u32, clip->x2, x + width);
>  	clip->y2 = max_t(u32, clip->y2, y + height);
> -	spin_unlock_irqrestore(&helper->dirty_lock, flags);
> -
>  	schedule_work(&helper->dirty_work);
> +	spin_unlock_irqrestore(&helper->dirty_lock, flags);
>  }
>  
>  /**
> -- 
> 2.10.1

-- 
Ville Syrjälä
Intel OTC