Subject: Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
From: Dmitry Baryshkov
Date: Thu, 13 Apr 2023 03:27:37 +0300
To: Rob Clark
Cc: Rodrigo Vivi, dri-devel@lists.freedesktop.org, Rob Clark, Tvrtko Ursulin,
 "open list:DOCUMENTATION", linux-arm-msm@vger.kernel.org, Emil Velikov,
 Christopher Healy, open list, Sean Paul, Boris Brezillon,
 freedreno@lists.freedesktop.org
References: <20230410210608.1873968-1-robdclark@gmail.com>

On 12/04/2023 23:34, Rob Clark wrote:
> On Wed, Apr 12, 2023 at 1:19 PM Dmitry Baryshkov wrote:
>>
>> On Wed, 12 Apr 2023 at 23:09, Rob Clark wrote:
>>>
>>> On Wed, Apr 12, 2023 at 5:47 AM Rodrigo Vivi wrote:
>>>>
>>>> On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
>>>>> On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
>>>>>> On 11/04/2023 21:28, Rob Clark wrote:
>>>>>>> On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov wrote:
>>>>>>>>
>>>>>>>> On Tue, 11 Apr 2023 at 20:13, Rob Clark wrote:
>>>>>>>>>
>>>>>>>>> On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter wrote:
>>>>>>>>>>
>>>>>>>>>> On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
>>>>>>>>>>> On Mon, Apr 10, 2023 at 2:06 PM Rob Clark wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> From: Rob Clark
>>>>>>>>>>>>
>>>>>>>>>>>> Similar motivation to other similar recent attempt[1].  But with an
>>>>>>>>>>>> attempt to have some shared code for this.  As well as documentation.
>>>>>>>>>>>>
>>>>>>>>>>>> It is probably a bit UMA-centric, I guess devices with VRAM might want
>>>>>>>>>>>> some placement stats as well.  But this seems like a reasonable start.
>>>>>>>>>>>>
>>>>>>>>>>>> Basic gputop support: https://patchwork.freedesktop.org/series/116236/
>>>>>>>>>>>> And already nvtop support: https://github.com/Syllo/nvtop/pull/204
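
As an aside for anyone following along: the fdinfo data is plain
"key: value" text under /proc/<pid>/fdinfo/<fd>, so a gputop/nvtop-style
tool needs very little code to consume it. A minimal sketch (the default
path is only an example; the memory keys added by this series would
simply show up as additional drm-* lines):

/*
 * Sketch: dump the drm-* keys from a DRM fd's fdinfo.
 * Key names follow Documentation/gpu/drm-usage-stats.rst
 * ("drm-driver", "drm-client-id", "drm-engine-<ring>", ...).
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	/* e.g. /proc/<pid>/fdinfo/<fd> for an fd opened on /dev/dri/card0 */
	const char *path = argc > 1 ? argv[1] : "/proc/self/fdinfo/3";
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		/* fdinfo lines are "key:\tvalue"; only drm-* keys matter here */
		if (!strncmp(line, "drm-", 4))
			fputs(line, stdout);
	}

	fclose(f);
	return 0;
}

Pointed at the fdinfo of a process holding a DRM fd, this prints
drm-driver, drm-client-id, the drm-engine-* lines and whatever else the
driver exposes.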
>>>>>>>>>>>
>>>>>>>>>>> On a related topic, I'm wondering if it would make sense to report
>>>>>>>>>>> some more global things (temp, freq, etc) via fdinfo?  Some of this,
>>>>>>>>>>> tools like nvtop could get by trawling sysfs or other driver specific
>>>>>>>>>>> ways.  But maybe it makes sense to have these sort of things reported
>>>>>>>>>>> in a standardized way (even though they aren't really per-drm_file)
>>>>>>>>>>
>>>>>>>>>> I think that's a bit much layering violation, we'd essentially have to
>>>>>>>>>> reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want
>>>>>>>>>> to be in :-)
>>>>>>>>>
>>>>>>>>> I guess this is true for temp (where there are thermal zones with
>>>>>>>>> potentially multiple temp sensors.. but I'm still digging my way thru
>>>>>>>>> the thermal_cooling_device stuff)
>>>>>>>>
>>>>>>>> It is slightly ugly. All thermal zones and cooling devices are virtual
>>>>>>>> devices (so there is not even a connection to the particular tsens
>>>>>>>> device). One can either enumerate them by checking
>>>>>>>> /sys/class/thermal/thermal_zoneN/type or enumerate them through
>>>>>>>> /sys/class/hwmon. For cooling devices the only enumeration is again
>>>>>>>> through /sys/class/thermal/cooling_deviceN/type.
>>>>>>>>
>>>>>>>> Probably it should be possible to push cooling devices and thermal
>>>>>>>> zones under the corresponding providers. However, I do not know if
>>>>>>>> there is a good way to correlate a cooling device (ideally a part of
>>>>>>>> the GPU) to the thermal_zone (which in our case is provided by tsens /
>>>>>>>> temp_alarm rather than by the GPU itself).
>>>>>>>>
>>>>>>>>> But what about freq?  I think, esp for cases where some "fw thing" is
>>>>>>>>> controlling the freq we end up needing to use gpu counters to measure
>>>>>>>>> the freq.
>>>>>>>>
>>>>>>>> For the freq it is slightly easier: /sys/class/devfreq/*, devices are
>>>>>>>> registered under the proper parent (IOW, the GPU). So one can read
>>>>>>>> /sys/class/devfreq/3d00000.gpu/cur_freq or
>>>>>>>> /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
>>>>>>>>
>>>>>>>> However, because of the component usage, there is no link from
>>>>>>>> /sys/class/drm/card0
>>>>>>>> (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
>>>>>>>> to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
>>>>>>>>
>>>>>>>> Getting all these items together in a platform-independent way would
>>>>>>>> definitely be an important but complex topic.
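
To make the enumeration above concrete, here is a rough userspace sketch
of what a tool has to do today. The "gpu" substring match and the
3d00000.gpu devfreq path are taken straight from the examples in this
thread, i.e. exactly the kind of heuristic we would like to avoid:

/*
 * Sketch only: find GPU-ish thermal zones by matching the "type" string
 * and read the current devfreq frequency.  There is no generic link from
 * the drm device to either node, hence the hard-coded names.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static int read_line(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f || !fgets(buf, len, f)) {
		if (f)
			fclose(f);
		return -1;
	}
	buf[strcspn(buf, "\n")] = '\0';
	fclose(f);
	return 0;
}

int main(void)
{
	DIR *d = opendir("/sys/class/thermal");
	struct dirent *de;
	char path[256], val[64];

	while (d && (de = readdir(d))) {
		if (strncmp(de->d_name, "thermal_zone", 12))
			continue;

		snprintf(path, sizeof(path), "/sys/class/thermal/%s/type",
			 de->d_name);
		if (read_line(path, val, sizeof(val)) || !strstr(val, "gpu"))
			continue;	/* heuristic name match, see above */

		snprintf(path, sizeof(path), "/sys/class/thermal/%s/temp",
			 de->d_name);
		if (!read_line(path, val, sizeof(val)))
			printf("%s: %s (millidegrees C)\n", de->d_name, val);
	}
	if (d)
		closedir(d);

	/* the devfreq node is per-GPU and platform specific (example path) */
	if (!read_line("/sys/class/devfreq/3d00000.gpu/cur_freq",
		       val, sizeof(val)))
		printf("cur_freq: %s Hz\n", val);

	return 0;
}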
>>>>>>>
>>>>>>> But I don't believe any of the pci gpu's use devfreq ;-)
>>>>>>>
>>>>>>> And also, you can't expect the CPU to actually know the freq when fw
>>>>>>> is the one controlling freq.  We can, currently, have a reasonable
>>>>>>> approximation from devfreq but that stops if IFPC is implemented.  And
>>>>>>> other GPUs have even less direct control.  So freq is a thing that I
>>>>>>> don't think we should try to get from "common frameworks"
>>>>>>
>>>>>> I think it might be useful to add another passive devfreq governor type
>>>>>> for external frequencies. This way we can use the same interface to
>>>>>> export non-CPU-controlled frequencies.
>>>>>
>>>>> Yeah this sounds like a decent idea to me too. It might also solve the
>>>>> fun of various pci devices having very non-standard freq controls in
>>>>> sysfs (looking at least at i915 here ...)
>>>>
>>>> I also like the idea of having some common infrastructure for the GPU
>>>> freq.
>>>>
>>>> hwmon has a good infrastructure, but it is more focused on individual
>>>> monitoring devices and not very welcoming to embedded monitoring and
>>>> control. I still want to check whether at least some freq control could
>>>> be aligned there.
>>>>
>>>> Another thing that complicates this is that there are multiple frequency
>>>> domains and controls with multipliers in Intel GPUs that are not very
>>>> standard or easy to integrate.
>>>>
>>>> On a quick glance devfreq seems neat because it aligns with cpufreq and
>>>> its governors. But again it would be hard to align with the multiple
>>>> domains and controls. It deserves a look, though.
>>>>
>>>> I will take a look at both fronts for Xe: hwmon and devfreq. Right now
>>>> on Xe we have far fewer controls than i915, but I can imagine that the
>>>> requirements will soon grow, and I fear that we end up just like i915.
>>>> So I will take a look before that happens.
>>>
>>> So it looks like i915 (dgpu only) and nouveau already use hwmon.. so
>>> maybe this is a good way to expose temp.  Maybe we can wire up some
>>> sort of helper for drivers which use thermal_cooling_device (which can
>>> be composed of multiple sensors) to give back an aggregate temp for
>>> hwmon to report?
>>
>> The thermal device already registers the hwmon, see below. The
>> question is about linking that hwmon to the drm device. Strictly
>> speaking, I don't think that we can re-export it in a clean way.
>>
>> # grep gpu /sys/class/hwmon/hwmon*/name
>> /sys/class/hwmon/hwmon15/name:gpu_top_thermal
>> /sys/class/hwmon/hwmon24/name:gpu_bottom_thermal
>
> I can't get excited about userspace relying on naming conventions or
> other heuristics like this.

As you can guess, me neither. We are not in the 2.4 world anymore.

> Also, userspace's view of the world is very much that there is a "gpu
> card", not a collection of parts.  (Windows seems to have the same view
> of the world.)  So we have the component framework to assemble the
> various parts together into the "device" that userspace expects to deal
> with.  We need to do something similar for exposing temp and freq.

I think we are looking for something close to device links. We need to
create a userspace-visible link from one device to another across the
device hierarchy. The current device_link API is tied to suspend/resume,
but the overall idea seems close enough (in my opinion).

>
>> # ls /sys/class/hwmon/hwmon15/ -l
>> lrwxrwxrwx 1 root root    0 Jan 26 08:14 device -> ../../thermal_zone15
>> -r--r--r-- 1 root root 4096 Jan 26 08:14 name
>> drwxr-xr-x 2 root root    0 Jan 26 08:15 power
>> lrwxrwxrwx 1 root root    0 Jan 26 08:12 subsystem -> ../../../../../class/hwmon
>> -r--r--r-- 1 root root 4096 Jan 26 08:14 temp1_input
>> -rw-r--r-- 1 root root 4096 Jan 26 08:12 uevent
>>
>>> Freq could possibly be added to hwmon (ie. seems like a reasonable
>>> attribute to add).  Devfreq might also be an option, but on arm it
>>> isn't necessarily associated with the drm device, whereas we could
>>> associate the hwmon with the drm device to make it easier for
>>> userspace to find.
>>
>> Possibly we can register a virtual 'passive' devfreq driven by another,
>> active devfreq device.
>
> That's all fine and good, but it has the same problem that the existing
> hwmons associated with the cooling-device have..
>
> BR,
> -R
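
For the sake of discussion, a rough sketch of what the driver side could
look like if we hung a hwmon instance off the GPU device itself, so that
userspace could find the sensor by walking sysfs from the device instead
of matching names. This is not an existing helper, and my_gpu_read_temp()
is only a placeholder for whatever aggregation over the cooling-device
sensors a driver can actually provide:

/*
 * Sketch, not an existing helper: expose an aggregate GPU temperature
 * through hwmon, parented to the GPU device.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/hwmon.h>

struct my_gpu;					/* driver private data (placeholder) */
int my_gpu_read_temp(struct my_gpu *gpu, long *millicelsius);	/* placeholder */

static umode_t my_gpu_hwmon_is_visible(const void *data,
				       enum hwmon_sensor_types type,
				       u32 attr, int channel)
{
	return (type == hwmon_temp && attr == hwmon_temp_input) ? 0444 : 0;
}

static int my_gpu_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
			     u32 attr, int channel, long *val)
{
	struct my_gpu *gpu = dev_get_drvdata(dev);

	if (type != hwmon_temp || attr != hwmon_temp_input)
		return -EOPNOTSUPP;

	/* hwmon ABI: temp1_input is in millidegrees Celsius */
	return my_gpu_read_temp(gpu, val);
}

static const struct hwmon_channel_info *my_gpu_hwmon_info[] = {
	HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
	NULL
};

static const struct hwmon_ops my_gpu_hwmon_ops = {
	.is_visible = my_gpu_hwmon_is_visible,
	.read = my_gpu_hwmon_read,
};

static const struct hwmon_chip_info my_gpu_hwmon_chip = {
	.ops = &my_gpu_hwmon_ops,
	.info = my_gpu_hwmon_info,
};

static int my_gpu_hwmon_register(struct device *gpu_dev, struct my_gpu *gpu)
{
	struct device *hwmon;

	/* parent it to the GPU device so it is discoverable from its sysfs node */
	hwmon = devm_hwmon_device_register_with_info(gpu_dev, "gpu", gpu,
						     &my_gpu_hwmon_chip, NULL);
	return PTR_ERR_OR_ZERO(hwmon);
}

It would not solve the cooling-device association problem by itself, but
it would at least give userspace one well-known place to look per GPU.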
>>> BR,
>>> -R
>>>
>>>>> I guess it would minimally be a good idea if we could document this, or
>>>>> maybe have a reference implementation in nvtop or whatever the cool
>>>>> thing is rn.
>>>>> -Daniel
>>>>>
>>>>>>> BR,
>>>>>>> -R
>>>>>>>
>>>>>>>>>> What might be needed is better glue to go from the fd or fdinfo to
>>>>>>>>>> the right hw device and then crawl around the hwmon in sysfs
>>>>>>>>>> automatically. I would not be surprised at all if we really suck on
>>>>>>>>>> this, probably more likely on SoC than pci gpus where at least
>>>>>>>>>> everything should be under the main pci sysfs device.
>>>>>>>>>
>>>>>>>>> yeah, I *think* userspace would have to look at /proc/device-tree to
>>>>>>>>> find the cooling device(s) associated with the gpu.. at least I don't
>>>>>>>>> see a straightforward way to figure it out just for sysfs
>>>>>>>>>
>>>>>>>>> BR,
>>>>>>>>> -R
>>>>>>>>>>
>>>>>>>>>> -Daniel
>>>>>>>>>>>
>>>>>>>>>>> BR,
>>>>>>>>>>> -R
>>>>>>>>>>>
>>>>>>>>>>>> [1] https://patchwork.freedesktop.org/series/112397/
>>>>>>>>>>>>
>>>>>>>>>>>> Rob Clark (2):
>>>>>>>>>>>>   drm: Add fdinfo memory stats
>>>>>>>>>>>>   drm/msm: Add memory stats to fdinfo
>>>>>>>>>>>>
>>>>>>>>>>>>  Documentation/gpu/drm-usage-stats.rst | 21 +++++++
>>>>>>>>>>>>  drivers/gpu/drm/drm_file.c            | 79 +++++++++++++++++++++++++++
>>>>>>>>>>>>  drivers/gpu/drm/msm/msm_drv.c         | 25 ++++++++-
>>>>>>>>>>>>  drivers/gpu/drm/msm/msm_gpu.c         |  2 -
>>>>>>>>>>>>  include/drm/drm_file.h                | 10 ++++
>>>>>>>>>>>>  5 files changed, 134 insertions(+), 3 deletions(-)

-- 
With best wishes
Dmitry