Subject: Re: [PATCH] drm/panfrost: Handle resetting on timeout better
To: Tomeu Vizoso, Neil Armstrong, Daniel Vetter, David Airlie, Rob Herring
Cc: Alyssa Rosenzweig, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
References: <20191007125014.12595-1-steven.price@arm.com> <81430487-0aa0-8c93-653f-d7a608f3dbff@baylibre.com> <4fae2d1e-e399-9e2b-60dc-b8a78333845f@collabora.com>
From: Steven Price
Date: Wed, 9 Oct 2019 10:42:29 +0100
In-Reply-To: <4fae2d1e-e399-9e2b-60dc-b8a78333845f@collabora.com>

On 07/10/2019 17:14, Tomeu Vizoso wrote:
> On 10/7/19 6:09 AM, Neil Armstrong wrote:
>> Hi Steven,
>>
>> On 07/10/2019 14:50, Steven Price wrote:
>>> Panfrost uses multiple schedulers (one for each slot, so 2 in reality),
>>> and on a timeout has to stop all the schedulers to safely perform a
>>> reset. However, more than one scheduler can trigger a timeout at the
>>> same time. This race condition results in jobs being freed while they
>>> are still in use.
>>>
>>> When stopping other slots, use cancel_delayed_work_sync() to ensure that
>>> any timeout started for that slot has completed. Also use
>>> mutex_trylock() to obtain reset_lock. This means that only one thread
>>> attempts the reset; the other threads will simply complete without doing
>>> anything (the first thread will wait for this in the call to
>>> cancel_delayed_work_sync()).
>>>
>>> While we're here, and since the function is already dependent on
>>> sched_job not being NULL, let's remove the unnecessary checks, along
>>> with a commented-out call to panfrost_core_dump() which has never
>>> existed in mainline.
>>>
>>
>> A Fixes: tag would be welcome here so it would be backported to v5.3
>>
>>> Signed-off-by: Steven Price
>>> ---
>>> This is a tidied up version of the patch originally posted here:
>>> http://lkml.kernel.org/r/26ae2a4d-8df1-e8db-3060-41638ed63e2a%40arm.com
>>>
>>>  drivers/gpu/drm/panfrost/panfrost_job.c | 17 +++++++++++------
>>>  1 file changed, 11 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> index a58551668d9a..dcc9a7603685 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> @@ -381,13 +381,19 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>>>          job_read(pfdev, JS_TAIL_LO(js)),
>>>          sched_job);
>>>
>>> -    mutex_lock(&pfdev->reset_lock);
>>> +    if (!mutex_trylock(&pfdev->reset_lock))
>>> +        return;
>>>
>>> -    for (i = 0; i < NUM_JOB_SLOTS; i++)
>>> -        drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);
>>> +    for (i = 0; i < NUM_JOB_SLOTS; i++) {
>>> +        struct drm_gpu_scheduler *sched = &pfdev->js->queue[i].sched;
>>> +
>>> +        drm_sched_stop(sched, sched_job);
>>> +        if (js != i)
>>> +            /* Ensure any timeouts on other slots have finished */
>>> +            cancel_delayed_work_sync(&sched->work_tdr);
>>> +    }
>>>
>>> -    if (sched_job)
>>> -        drm_sched_increase_karma(sched_job);
>>> +    drm_sched_increase_karma(sched_job);
>>
>> Indeed looks cleaner.
>>
>>>
>>>      spin_lock_irqsave(&pfdev->js->job_lock, flags);
>>>      for (i = 0; i < NUM_JOB_SLOTS; i++) {
>>> @@ -398,7 +404,6 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>>>      }
>>>      spin_unlock_irqrestore(&pfdev->js->job_lock, flags);
>>>
>>> -    /* panfrost_core_dump(pfdev); */
>>
>> This should be cleaned in another patch !
>
> Seems to me that this should be some kind of TODO, see
> etnaviv_core_dump() for the kind of things we could be doing.
>
> Maybe we can delete this line and mention this in the TODO file?

Fair enough - I'll split this into a separate patch and add an entry to
the TODO file. kbase has a mechanism to "dump on job fault" [1],[2] so we
could do something similar.
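
To give a rough idea of the direction (a purely illustrative sketch, not
kbase's implementation and not part of this patch; the last_fault_*
fields below are invented for the example and don't exist in
struct panfrost_device today), a debugfs-based hook could look
something like:

/*
 * Hypothetical sketch only: the general shape of a "dump on job fault"
 * debugfs file. The last_fault_js/last_fault_status fields are made up
 * for this example.
 */
#include <linux/debugfs.h>
#include <linux/seq_file.h>

#include "panfrost_device.h"

static int panfrost_job_fault_show(struct seq_file *m, void *unused)
{
        struct panfrost_device *pfdev = m->private;

        /* Print whatever job state was captured when the fault happened. */
        seq_printf(m, "slot: %d\n", pfdev->last_fault_js);
        seq_printf(m, "status: 0x%08x\n", pfdev->last_fault_status);

        return 0;
}
DEFINE_SHOW_ATTRIBUTE(panfrost_job_fault);

static void panfrost_job_fault_debugfs_init(struct panfrost_device *pfdev,
                                            struct dentry *root)
{
        /* Expose the captured fault state under the driver's debugfs dir. */
        debugfs_create_file("job_fault", 0444, root, pfdev,
                            &panfrost_job_fault_fops);
}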
Steve

[1] https://gitlab.freedesktop.org/panfrost/mali_kbase/blob/master/driver/product/kernel/drivers/gpu/arm/midgard/backend/gpu/mali_kbase_debug_job_fault_backend.c
[2] https://gitlab.freedesktop.org/panfrost/mali_kbase/blob/master/driver/product/kernel/drivers/gpu/arm/midgard/mali_kbase_debug_job_fault.c

> Cheers,
>
> Tomeu
>
>>>
>>>      panfrost_devfreq_record_transition(pfdev, js);
>>>      panfrost_device_reset(pfdev);
>>>
>>
>> Thanks,
>> Testing it right now with the last change removed (doesn't apply on
>> v5.3 with it), results in a few hours... or minutes !
>>
>> Neil
>>