From: Steven Price
To: Tvrtko Ursulin, Adrián Larumbe
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, tzimmermann@suse.de,
 airlied@gmail.com, daniel@ffwll.ch, robdclark@gmail.com,
 quic_abhinavk@quicinc.com, dmitry.baryshkov@linaro.org, sean@poorly.run,
 marijn.suijten@somainline.org, robh@kernel.org, linux-arm-msm@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 healych@amazon.com, Boris Brezillon, kernel@collabora.com,
 freedreno@lists.freedesktop.org
Subject: Re: [PATCH v6 2/6] drm/panfrost: Add fdinfo support GPU load metrics
Date: Fri, 22 Sep 2023 16:23:15 +0100
Message-ID: <1e9e2849-6549-7a67-32e4-5b80ba269f82@arm.com>
In-Reply-To: <68cbe1af-f485-41a4-111a-c695697ef26f@linux.intel.com>
References: <20230919233556.1458793-1-adrian.larumbe@collabora.com>
 <20230919233556.1458793-3-adrian.larumbe@collabora.com>
 <68cbe1af-f485-41a4-111a-c695697ef26f@linux.intel.com>

On 22/09/2023 14:53, Tvrtko Ursulin wrote:
> 
> On 22/09/2023 11:57, Adrián Larumbe wrote:
>> On 20.09.2023 16:40, Tvrtko Ursulin wrote:
>>> On 20/09/2023 00:34, Adrián Larumbe wrote:
>>>> The drm-stats fdinfo tags made available to user space are drm-engine,
>>>> drm-cycles, drm-maxfreq and drm-curfreq, one per job slot.
>>>>
>>>> This deviates from standard practice in other DRM drivers, where a
>>>> single set of key:value pairs is provided for the whole render engine.
>>>> However, Panfrost has separate queues for fragment and vertex/tiler
>>>> jobs, so a decision was made to calculate bus cycles and workload
>>>> times separately.
>>>>
>>>> Maximum operating frequency is calculated at devfreq initialisation
>>>> time. Current frequency is made available to user space because nvtop
>>>> uses it when performing engine usage calculations.
>>>>
>>>> It is important to bear in mind that both the GPU cycle and kernel
>>>> time numbers provided are at best rough estimations, always reported
>>>> in excess of the actual figure, for two reasons:
>>>>    - Excess time because of the delay between the end of a job
>>>>      processing, the subsequent job IRQ and the actual time of the
>>>>      sample.
>>>>    - Time spent in the engine queue waiting for the GPU to pick up
>>>>      the next job.
>>>>
>>>> To avoid race conditions during enabling/disabling, a reference
>>>> counting mechanism was introduced, together with a job flag that
>>>> tells us whether a given job increased the refcount. This is
>>>> necessary because user space can toggle cycle counting through a
>>>> debugfs file, and a given job might have been in flight by the time
>>>> cycle counting was disabled.
>>>>
>>>> The main goal of the debugfs cycle counter knob is letting tools like
>>>> nvtop or IGT's gputop switch it at any time, to avoid power waste in
>>>> case no engine usage measuring is necessary.
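
The enable/disable refcounting described above lands in panfrost_gpu.c,
which isn't quoted in this excerpt. Below is a minimal sketch of the
pattern, built on the cycle_counter struct added further down in the
patch - the panfrost_cycle_counter_get()/put() names and the
GPU_CMD_CYCLE_COUNT_START/STOP command encodings are assumptions here,
while gpu_write() is the driver's existing register write helper:

void panfrost_cycle_counter_get(struct panfrost_device *pfdev)
{
    /* Fast path: counting is already on, just take another reference. */
    if (atomic_inc_not_zero(&pfdev->cycle_counter.use_count))
        return;

    spin_lock(&pfdev->cycle_counter.lock);
    /* First user turns the hardware cycle counters on. */
    if (atomic_inc_return(&pfdev->cycle_counter.use_count) == 1)
        gpu_write(pfdev, GPU_CMD, GPU_CMD_CYCLE_COUNT_START);
    spin_unlock(&pfdev->cycle_counter.lock);
}

void panfrost_cycle_counter_put(struct panfrost_device *pfdev)
{
    /* Fast path: not the last user, just drop our reference. */
    if (atomic_add_unless(&pfdev->cycle_counter.use_count, -1, 1))
        return;

    spin_lock(&pfdev->cycle_counter.lock);
    /* Last user turns the counters off again. */
    if (atomic_dec_return(&pfdev->cycle_counter.use_count) == 0)
        gpu_write(pfdev, GPU_CMD, GPU_CMD_CYCLE_COUNT_STOP);
    spin_unlock(&pfdev->cycle_counter.lock);
}

A job that took a reference records that fact in a flag, so a job still
in flight when the debugfs knob is flipped off drops exactly the
reference it took and the counter stays balanced.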
>>>>
>>>> Signed-off-by: Adrián Larumbe
>>>> Reviewed-by: Boris Brezillon
>>>> Reviewed-by: Steven Price
>>>> ---
>>>>   drivers/gpu/drm/panfrost/Makefile           |  2 +
>>>>   drivers/gpu/drm/panfrost/panfrost_debugfs.c | 20 ++++++++
>>>>   drivers/gpu/drm/panfrost/panfrost_debugfs.h | 13 +++++
>>>>   drivers/gpu/drm/panfrost/panfrost_devfreq.c |  8 +++
>>>>   drivers/gpu/drm/panfrost/panfrost_devfreq.h |  3 ++
>>>>   drivers/gpu/drm/panfrost/panfrost_device.c  |  2 +
>>>>   drivers/gpu/drm/panfrost/panfrost_device.h  | 13 +++++
>>>>   drivers/gpu/drm/panfrost/panfrost_drv.c     | 57 ++++++++++++++++++++-
>>>>   drivers/gpu/drm/panfrost/panfrost_gpu.c     | 41 +++++++++++++++
>>>>   drivers/gpu/drm/panfrost/panfrost_gpu.h     |  4 ++
>>>>   drivers/gpu/drm/panfrost/panfrost_job.c     | 24 +++++++++
>>>>   drivers/gpu/drm/panfrost/panfrost_job.h     |  5 ++
>>>>   12 files changed, 191 insertions(+), 1 deletion(-)
>>>>   create mode 100644 drivers/gpu/drm/panfrost/panfrost_debugfs.c
>>>>   create mode 100644 drivers/gpu/drm/panfrost/panfrost_debugfs.h
>>>>
>>>> diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
>>>> index 7da2b3f02ed9..2c01c1e7523e 100644
>>>> --- a/drivers/gpu/drm/panfrost/Makefile
>>>> +++ b/drivers/gpu/drm/panfrost/Makefile
>>>> @@ -12,4 +12,6 @@ panfrost-y := \
>>>>      panfrost_perfcnt.o \
>>>>      panfrost_dump.o
>>>> +panfrost-$(CONFIG_DEBUG_FS) += panfrost_debugfs.o
>>>> +
>>>>  obj-$(CONFIG_DRM_PANFROST) += panfrost.o
>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_debugfs.c b/drivers/gpu/drm/panfrost/panfrost_debugfs.c
>>>> new file mode 100644
>>>> index 000000000000..cc14eccba206
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_debugfs.c
>>>> @@ -0,0 +1,20 @@
>>>> +// SPDX-License-Identifier: GPL-2.0
>>>> +/* Copyright 2023 Collabora ltd. */
>>>> +
>>>> +#include
>>>> +#include
>>>> +#include
>>>> +#include
>>>> +#include
>>>> +
>>>> +#include "panfrost_device.h"
>>>> +#include "panfrost_gpu.h"
>>>> +#include "panfrost_debugfs.h"
>>>> +
>>>> +void panfrost_debugfs_init(struct drm_minor *minor)
>>>> +{
>>>> +    struct drm_device *dev = minor->dev;
>>>> +    struct panfrost_device *pfdev = platform_get_drvdata(to_platform_device(dev->dev));
>>>> +
>>>> +    debugfs_create_atomic_t("profile", 0600, minor->debugfs_root, &pfdev->profile_mode);
>>>> +}
>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_debugfs.h b/drivers/gpu/drm/panfrost/panfrost_debugfs.h
>>>> new file mode 100644
>>>> index 000000000000..db1c158bcf2f
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_debugfs.h
>>>> @@ -0,0 +1,13 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +/*
>>>> + * Copyright 2023 Collabora ltd.
>>>> + */
>>>> +
>>>> +#ifndef PANFROST_DEBUGFS_H
>>>> +#define PANFROST_DEBUGFS_H
>>>> +
>>>> +#ifdef CONFIG_DEBUG_FS
>>>> +void panfrost_debugfs_init(struct drm_minor *minor);
>>>> +#endif
>>>> +
>>>> +#endif  /* PANFROST_DEBUGFS_H */
>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
>>>> index 58dfb15a8757..28caffc689e2 100644
>>>> --- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
>>>> @@ -58,6 +58,7 @@ static int panfrost_devfreq_get_dev_status(struct device *dev,
>>>>      spin_lock_irqsave(&pfdevfreq->lock, irqflags);
>>>>      panfrost_devfreq_update_utilization(pfdevfreq);
>>>> +    pfdevfreq->current_frequency = status->current_frequency;
>>>>      status->total_time = ktime_to_ns(ktime_add(pfdevfreq->busy_time,
>>>>                             pfdevfreq->idle_time));
>>>> @@ -117,6 +118,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
>>>>      struct devfreq *devfreq;
>>>>      struct thermal_cooling_device *cooling;
>>>>      struct panfrost_devfreq *pfdevfreq = &pfdev->pfdevfreq;
>>>> +    unsigned long freq = ULONG_MAX;
>>>>      if (pfdev->comp->num_supplies > 1) {
>>>>          /*
>>>> @@ -172,6 +174,12 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
>>>>          return ret;
>>>>      }
>>>> +    /* Find the fastest defined rate  */
>>>> +    opp = dev_pm_opp_find_freq_floor(dev, &freq);
>>>> +    if (IS_ERR(opp))
>>>> +        return PTR_ERR(opp);
>>>> +    pfdevfreq->fast_rate = freq;
>>>> +
>>>>      dev_pm_opp_put(opp);
>>>>      /*
>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.h b/drivers/gpu/drm/panfrost/panfrost_devfreq.h
>>>> index 1514c1f9d91c..48dbe185f206 100644
>>>> --- a/drivers/gpu/drm/panfrost/panfrost_devfreq.h
>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.h
>>>> @@ -19,6 +19,9 @@ struct panfrost_devfreq {
>>>>      struct devfreq_simple_ondemand_data gov_data;
>>>>      bool opp_of_table_added;
>>>> +    unsigned long current_frequency;
>>>> +    unsigned long fast_rate;
>>>> +
>>>>      ktime_t busy_time;
>>>>      ktime_t idle_time;
>>>>      ktime_t time_last_update;
>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
>>>> index fa1a086a862b..28f7046e1b1a 100644
>>>> --- a/drivers/gpu/drm/panfrost/panfrost_device.c
>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_device.c
>>>> @@ -207,6 +207,8 @@ int panfrost_device_init(struct panfrost_device *pfdev)
>>>>      spin_lock_init(&pfdev->as_lock);
>>>> +    spin_lock_init(&pfdev->cycle_counter.lock);
>>>> +
>>>>      err = panfrost_clk_init(pfdev);
>>>>      if (err) {
>>>>          dev_err(pfdev->dev, "clk init failed %d\n", err);
>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
>>>> index b0126b9fbadc..1e85656dc2f7 100644
>>>> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
>>>> @@ -107,6 +107,7 @@ struct panfrost_device {
>>>>      struct list_head scheduled_jobs;
>>>>      struct panfrost_perfcnt *perfcnt;
>>>> +    atomic_t profile_mode;
>>>>      struct mutex sched_lock;
>>>> @@ -121,6 +122,11 @@ struct panfrost_device {
>>>>      struct shrinker shrinker;
>>>>      struct panfrost_devfreq pfdevfreq;
>>>> +
>>>> +    struct {
>>>> +        atomic_t use_count;
>>>> +        spinlock_t lock;
>>>> +    } cycle_counter;
>>>>  };
>>>>  struct panfrost_mmu {
>>>> @@ -135,12 +141,19 @@ struct panfrost_mmu {
>>>>      struct list_head list;
>>>>  };
>>>> +struct panfrost_engine_usage {
>>>> +    unsigned long long elapsed_ns[NUM_JOB_SLOTS];
>>>> +    unsigned long long cycles[NUM_JOB_SLOTS];
>>>> +};
>>>> +
>>>>  struct panfrost_file_priv {
>>>>      struct panfrost_device *pfdev;
>>>>      struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
>>>>      struct panfrost_mmu *mmu;
>>>> +
>>>> +    struct panfrost_engine_usage engine_usage;
>>>>  };
>>>>  static inline struct panfrost_device *to_panfrost_device(struct drm_device *ddev)
>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>> index a2ab99698ca8..3c93a11deab1 100644
>>>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>>>> @@ -20,6 +20,7 @@
>>>>  #include "panfrost_job.h"
>>>>  #include "panfrost_gpu.h"
>>>>  #include "panfrost_perfcnt.h"
>>>> +#include "panfrost_debugfs.h"
>>>>  static bool unstable_ioctls;
>>>>  module_param_unsafe(unstable_ioctls, bool, 0600);
>>>> @@ -267,6 +268,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>>>>      job->requirements = args->requirements;
>>>>      job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
>>>>      job->mmu = file_priv->mmu;
>>>> +    job->engine_usage = &file_priv->engine_usage;
>>>>      slot = panfrost_job_get_slot(job);
>>>> @@ -523,7 +525,55 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
>>>>      PANFROST_IOCTL(MADVISE,        madvise,    DRM_RENDER_ALLOW),
>>>>  };
>>>> -DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
>>>> +
>>>> +static void panfrost_gpu_show_fdinfo(struct panfrost_device *pfdev,
>>>> +                     struct panfrost_file_priv *panfrost_priv,
>>>> +                     struct drm_printer *p)
>>>> +{
>>>> +    int i;
>>>> +
>>>> +    /*
>>>> +     * IMPORTANT NOTE: drm-cycles and drm-engine measurements are not
>>>> +     * accurate, as they only provide a rough estimation of the number of
>>>> +     * GPU cycles and CPU time spent in a given context. This is due to two
>>>> +     * different factors:
>>>> +     * - Firstly, we must consider the time the CPU and then the kernel
>>>> +     *   takes to process the GPU interrupt, which means additional time and
>>>> +     *   GPU cycles will be added in excess to the real figure.
>>>> +     * - Secondly, the pipelining done by the Job Manager (2 job slots per
>>>> +     *   engine) implies there is no way to know exactly how much time each
>>>> +     *   job spent on the GPU.
>>>> +     */
>>>> +
>>>> +    static const char * const engine_names[] = {
>>>> +        "fragment", "vertex-tiler", "compute-only"
>>>> +    };
>>>> +
>>>> +    for (i = 0; i < NUM_JOB_SLOTS - 1; i++) {
>>>
>>> FWIW you could future proof this a bit by using
>>> "i < ARRAY_SIZE(engine_names)" and avoid maybe silent out of bounds
>>> reads if someone updates NUM_JOB_SLOTS and forgets about this loop.
>>> Or stick a warning of some sort.
>>>
>> NUM_JOB_SLOTS is actually the same as the number of engines in the
>> device. I decided to follow this loop convention because that's what's
>> being done across the driver when manipulating the engine queues, so I
>> thought I'd stick to it for the sake of consistency.
>> Bear in mind the loop doesn't pick up the compute-only engine because
>> it's still not exposed to user space.
>>
>> So NUM_JOB_SLOTS cannot change, unless a new engine were introduced,
>> and then someone would have to update this array accordingly.
> 
> Exactly, and until they did, we'd have a silent out-of-bounds memory
> access here - the contents of which even get shared with userspace. ;)

I think using NUM_JOB_SLOTS here seems sensible (as Adrián points out it's
consistent with the rest of the driver). But a BUILD_BUG_ON checking the
array size could make sense.

In reality I don't see the number of job slots ever changing - panfrost is
now for the 'old' architecture (panthor being the new driver for the later
'CSF' architecture). And even if there was a new design for pre-CSF, it
would be a very big change to the architecture: we've kept the 3 slots all
the way through, even though the 3rd is never used on most GPUs. But
equally I've been wrong before ;)

Steve

>>>> +        drm_printf(p, "drm-engine-%s:\t%llu ns\n",
>>>> +               engine_names[i], panfrost_priv->engine_usage.elapsed_ns[i]);
>>>> +        drm_printf(p, "drm-cycles-%s:\t%llu\n",
>>>> +               engine_names[i], panfrost_priv->engine_usage.cycles[i]);
>>>> +        drm_printf(p, "drm-maxfreq-%s:\t%lu Hz\n",
>>>> +               engine_names[i], pfdev->pfdevfreq.fast_rate);
>>>> +        drm_printf(p, "drm-curfreq-%s:\t%lu Hz\n",
>>>> +               engine_names[i], pfdev->pfdevfreq.current_frequency);
>>>
>>> I envisaged a link to driver specific docs at the bottom of
>>> drm-usage-stats.rst so it would be nice if drivers would be adding
>>> those sections and describing their private keys, engine names etc. ;)
>>>
>> Currently there's no panfrost.rst file under Documentation/gpu. I guess
>> I'll create a new one and add the engine descriptions and the meaning
>> of the drm-curfreq key.
> 
> Yeah I have to do the same for i915 in my memory stats series. :)
> 
> Regards,
> 
> Tvrtko
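
As an illustration of the BUILD_BUG_ON Steve suggests, the check could sit
right next to the array in panfrost_gpu_show_fdinfo() - a sketch against
the hunk quoted above, not part of the posted patch:

    static const char * const engine_names[] = {
        "fragment", "vertex-tiler", "compute-only"
    };

    /* Catch anyone growing NUM_JOB_SLOTS without extending the names. */
    BUILD_BUG_ON(ARRAY_SIZE(engine_names) != NUM_JOB_SLOTS);

    for (i = 0; i < NUM_JOB_SLOTS - 1; i++) {
        /* drm_printf() calls as in the hunk above */
    }

For readers following the drm-usage-stats discussion: with the format
strings above, reading /proc/<pid>/fdinfo/<fd> for a Panfrost client
yields key:value pairs along these lines (values invented for
illustration):

    drm-engine-fragment:	1234567 ns
    drm-cycles-fragment:	4242424
    drm-maxfreq-fragment:	800000000 Hz
    drm-curfreq-fragment:	500000000 Hz
    drm-engine-vertex-tiler:	987654 ns
    drm-cycles-vertex-tiler:	2424242
    drm-maxfreq-vertex-tiler:	800000000 Hz
    drm-curfreq-vertex-tiler:	500000000 Hz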