Subject: Re: [PATCH 0/1] Always record job cycle and timestamp information
Message-ID: <0c001651-0339-4872-bf4f-d1a3e4f2aa43@linux.intel.com>
Date: Wed, 21 Feb 2024 14:34:09 +0000
From: Tvrtko Ursulin
Organization: Intel Corporation UK Plc
To: Adrián Larumbe
Cc: Daniel Vetter, Steven Price, Lionel Landwerlin, Boris Brezillon,
 Rob Herring, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 David Airlie, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, Umesh Nerlige Ramappa
References: <20240214121435.3813983-1-adrian.larumbe@collabora.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 21/02/2024 09:40, Adrián Larumbe wrote:
> Hi,
>
> I just wanted to make sure we're on the same page on this matter. So in
> Panfrost, and I guess in almost every other driver out there, HW perf
> counters and their uapi interface are orthogonal to fdinfo's reporting
> on drm engine utilisation.
>
> At the moment it seems like HW perf counters and the way they're
> exposed to UM are very idiosyncratic, and any attempt to unify their
> interface into a common set of ioctls sounds like a gargantuan task I
> wouldn't like to be faced with.

I share the same feeling on this sub-topic.

> As for fdinfo, I guess there's more room for coming up with common
> helpers that could handle the toggling of HW support for drm engine
> calculations, but I'd at least have to see how things are being done
> in, let's say, Freedreno or Intel.

For Intel we don't need this ability, at least not on pre-GuC platforms:
stat collection there is super cheap and permanently enabled. But let me
copy Umesh, because something at the back of my mind is telling me that
perhaps there was something expensive about collecting these stats with
the GuC backend. If so, maybe a toggle would be beneficial there.

> Right now there's a pressing need to get rid of the debugfs knob for
> fdinfo's drm engine profiling sources in Panfrost, after which I could
> perhaps draw up an RFC for how to generalise this onto other drivers.

There is a knob currently, meaning fdinfo does not work by default? If
that is so, I would have at least expected someone to submit a patch for
gputop to handle this toggle. It being kind of a common reference
implementation, I don't think it is great if it does not work out of the
box.

The toggle as an idea sounds a bit annoying, but if there is no other
realistic way, maybe it is not too bad. As long as it is documented in
drm-usage-stats.rst, does not live in debugfs, and has some common
plumbing implemented both on the kernel side and for the aforementioned
gputop / igt_drm_fdinfo / igt_drm_clients. Where and how exactly TBD.
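As a side note for anyone following along: the keys gputop and
igt_drm_fdinfo consume are plain `key: value` text exposed under
/proc/<pid>/fdinfo/<fd>, per drm-usage-stats.rst. A minimal parsing
sketch in Python (the key names follow that document; the helper names
are mine, not from either tool):

```python
# Sketch of how a gputop-like tool derives per-client GPU utilisation
# from the drm-* fdinfo keys documented in drm-usage-stats.rst.
# parse_drm_fdinfo/utilisation are illustrative helper names.

def parse_drm_fdinfo(text):
    """Parse drm-* key/value pairs from one /proc/<pid>/fdinfo/<fd> blob."""
    engines = {}
    client_id = None
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key.startswith("drm-engine-"):
            # Accumulated busy time, e.g. "drm-engine-render: 123456789 ns"
            engines[key[len("drm-engine-"):]] = int(value.split()[0])
        elif key == "drm-client-id":
            client_id = int(value)
    return client_id, engines

def utilisation(prev, curr, wall_ns):
    """Per-engine busy fraction between two samples taken wall_ns apart."""
    return {name: (curr[name] - prev.get(name, 0)) / wall_ns
            for name in curr}
```

Utilisation is then just the delta of the accumulated ns values between
two samples, divided by wall time, which is why a driver that silently
reports zeros behind a knob breaks such tools without any error.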
Regards,

Tvrtko

> On 16.02.2024 17:43, Tvrtko Ursulin wrote:
>>
>> On 16/02/2024 16:57, Daniel Vetter wrote:
>>> On Wed, Feb 14, 2024 at 01:52:05PM +0000, Steven Price wrote:
>>>> Hi Adrián,
>>>>
>>>> On 14/02/2024 12:14, Adrián Larumbe wrote:
>>>>> A driver user expressed interest in being able to access engine
>>>>> usage stats through fdinfo when debugfs is not built into their
>>>>> kernel. In the current implementation this wasn't possible, because
>>>>> it was assumed that enabling the cycle counter and timestamp
>>>>> registers, even for in-flight jobs, would incur additional power
>>>>> consumption, so both were kept disabled until toggled through
>>>>> debugfs.
>>>>>
>>>>> A second read of the TRM made me think otherwise, but this is
>>>>> something that would be best clarified by someone from ARM's side.
>>>>
>>>> I'm afraid I can't give a definitive answer. This will probably vary
>>>> depending on the implementation. The command register
>>>> enables/disables "propagation" of the cycle/timestamp values. This
>>>> propagation will cost some power (gates are getting toggled), but
>>>> whether that power is completely in the noise of the GPU as a whole
>>>> I can't say.
>>>>
>>>> The out-of-tree kbase driver only enables the counters for jobs
>>>> explicitly marked (BASE_JD_REQ_PERMON) or due to an explicit
>>>> connection from a profiler.
>>>>
>>>> I'd be happier moving the debugfs file to sysfs rather than assuming
>>>> that the power consumption is small enough for all platforms.
>>>>
>>>> Ideally we'd have some sort of kernel interface for a profiler to
>>>> inform the kernel what it is interested in, but I can't immediately
>>>> see how to make that useful across different drivers. kbase's
>>>> profiling support works great with our profiling tools, but there's
>>>> a very strong connection between the two.
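The per-job enabling kbase does suggests an alternative to a global
knob: refcount the counter across in-flight users, so the hardware is
only powered while at least one interested job (or profiler) is active.
A toy model of that pattern, purely illustrative and not kbase or
panfrost code (a kernel version would take a spinlock around the
refcount and the register writes):

```python
# Toy model of a refcounted get/put toggle for a power-hungry counter:
# the first user enables the hardware, the last user disables it.
# CycleCounter and its method names are made up for this sketch.

class CycleCounter:
    def __init__(self):
        self.refcount = 0
        self.hw_on = False  # stands in for the enable register state

    def get(self):
        """Called when a job needing cycle/timestamp data is submitted."""
        self.refcount += 1
        if self.refcount == 1:
            self.hw_on = True  # first user: start propagation

    def put(self):
        """Called when such a job completes."""
        assert self.refcount > 0, "unbalanced put"
        self.refcount -= 1
        if self.refcount == 0:
            self.hw_on = False  # last user: stop propagation
```

With this shape, the "does it cost power" question only matters while
someone is actually profiling, which sidesteps the always-on versus
debugfs-knob trade-off for everyone else.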
>>>
>>> Yeah, I'm not sure whether a magic (and probably massively different
>>> per-driver) file in sysfs is needed to enable GPU perf monitoring
>>> stats in fdinfo.
>>>
>>> I get that we do have a bit of a gap because the Linux perf PMU
>>> stuff is global, you want per-process, and there's kind of no
>>> per-process support for perf stats for devices. But that's probably
>>> the direction we want to go, not so much fdinfo. At least for
>>> hardware performance counters and things like that.
>>>
>>> IIRC the i915 PMU support had some integration for per-process
>>> support; you might want to chat with Tvrtko for the kernel side and
>>> Lionel for more of the userspace side. At least if I'm not making a
>>> complete mess of it and my memory is vaguely related to reality.
>>> Adding them both.
>>
>> Yeah, there are two separate things: i915 PMU and i915 Perf/OA.
>>
>> If my memory serves me right, I did indeed have per-process support
>> for the i915 PMU implemented as an RFC (or at least a branch
>> somewhere) some years back. IIRC it only exposed the per-engine GPU
>> utilisation, and I did not find it very useful versus the complexity.
>> (I think it at least required maintaining a map of drm clients per
>> task.)
>>
>> Our more useful profiling uses a custom Perf/OA (Observation
>> Architecture) interface, which is possibly similar to the kbase one
>> mentioned above. Why it is a custom interface is explained in a large
>> comment on top of i915_perf.c. Not sure all of those reasons still
>> hold, but overall perf does not sound like the right fit for detailed
>> GPU profiling.
>>
>> Also, PMU drivers are very challenging to implement correctly, since
>> the locking model and atomicity requirements are quite demanding.
>>
>> From my point of view, or at least as my initial thinking, if custom
>> per-driver solutions are strongly undesired, it could be interesting
>> to look into whether there is enough commonality, in concepts at
>> least, to see if a new DRM-level common but extensible API would be
>> doable. Even then it may be tricky to "extract" enough common code to
>> justify it.
>>
>> Regards,
>>
>> Tvrtko
>>
>>>
>>> Cheers, Sima
>>>
>>>
>>>>
>>>> Steve
>>>>
>>>>> Adrián Larumbe (1):
>>>>>   drm/panfrost: Always record job cycle and timestamp information
>>>>>
>>>>>  drivers/gpu/drm/panfrost/Makefile           |  2 --
>>>>>  drivers/gpu/drm/panfrost/panfrost_debugfs.c | 21 ------------------
>>>>>  drivers/gpu/drm/panfrost/panfrost_debugfs.h | 14 ------------
>>>>>  drivers/gpu/drm/panfrost/panfrost_device.h  |  1 -
>>>>>  drivers/gpu/drm/panfrost/panfrost_drv.c     |  5 -----
>>>>>  drivers/gpu/drm/panfrost/panfrost_job.c     | 24 ++++++++-------------
>>>>>  drivers/gpu/drm/panfrost/panfrost_job.h     |  1 -
>>>>>  7 files changed, 9 insertions(+), 59 deletions(-)
>>>>>  delete mode 100644 drivers/gpu/drm/panfrost/panfrost_debugfs.c
>>>>>  delete mode 100644 drivers/gpu/drm/panfrost/panfrost_debugfs.h
>>>>>
>>>>>
>>>>> base-commit: 6b1f93ea345947c94bf3a7a6e668a2acfd310918
>>>>
>>>
>>> --
>>> Daniel Vetter
>>> Software Engineer, Intel Corporation
>>> http://blog.ffwll.ch
>