Message-ID: <53CF828E.5020201@canonical.com>
Date: Wed, 23 Jul 2014 11:38:22 +0200
From: Maarten Lankhorst
To: Christian König, Daniel Vetter
CC: Christian König, Thomas Hellstrom, nouveau, LKML, dri-devel, Ben Skeggs, "Deucher, Alexander"
Subject: Re: [Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation for fences
In-Reply-To: <53CF822E.7050601@amd.com>

On 23-07-14 11:36, Christian König wrote:
> On 23.07.2014 11:30, Daniel Vetter wrote:
>> On Wed, Jul 23, 2014 at 11:27 AM, Christian König wrote:
>>> You submit a job to the hardware and then block the job to wait for radeon
>>> to be finished? Well, then this would indeed require a hardware reset, but
>>> wouldn't that make the whole problem even worse?
>>>
>>> I mean, currently we block one userspace process to wait for other hardware
>>> to be finished with a buffer, but what you are describing here blocks the
>>> whole hardware to wait for other hardware, which in the end blocks all
>>> userspace processes accessing the hardware.
>> There is nothing new here with prime - if one context hangs the gpu it
>> blocks everyone else on i915.
>>
>>> Talking about alternative approaches, wouldn't it be simpler to just offload
>>> the waiting to a different kernel or userspace thread?
>> Well, this is exactly what we'll do once we have the scheduler. But
>> this is an orthogonal issue imo.
>
> Mhm, couldn't we have the scheduler first?
>
> Because that sounds like reducing the necessary fence interface to just a
> fence->wait function.

You would also lose benefits like having a 'perf timechart' for GPUs.

~Maarten