From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    Tvrtko Ursulin, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot
Subject: [RFC 0/6] CPU + GPU synchronised priority scheduling
Date: Thu, 30 Sep 2021 18:15:46 +0100
Message-Id: <20210930171552.501553-1-tvrtko.ursulin@linux.intel.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
From: Tvrtko Ursulin

This is a somewhat early sketch of one of my ideas, sent out for early
feedback from the core scheduler experts. The first and the last two
patches in the series are the most interesting ones for people outside
of i915. (Note I did not copy everyone on all patches, only on this
cover letter for context; the rest should be available from the mailing
list.)

The general idea is that the current processing landscape seems to be
more and more composed of pipelines where computations are done on
multiple hardware devices. Furthermore, some of the non-CPU devices,
such as the many GPUs supported by the i915 driver, actually support
priority-based scheduling, which is currently rather inaccessible to
the user (in terms of being able to control it from the outside). From
these two observations a question arises: how to provide a simple,
effective and consolidated user experience? In other words, why should
a user not be able to do something like:

$ nice ffmpeg ...transcode my videos...
$ my-favourite-game

and have the nice hint apply to the GPU parts of the transcode pipeline
as well?

Another reason I started thinking about this is that I noticed the
Chrome browser, for instance, uses nice to de-prioritise background
tabs. So again, having that decision propagate to the GPU rendering
pipeline sounds like a big plus to the overall user experience.

This RFC implements the idea, with the hairy part being the notifier
chain I added to enable dynamic adjustments. It is a global notifier,
which raises a few questions, so I am very curious what the experts
will think here. Please see the open questions in the first patch for
more on this. The last two patches then implement a hash table in i915
so the driver can associate the notifier callback with the correct GPU
rendering contexts. (Rough sketches of both mechanisms follow at the
end of this letter.)

On a more positive note, the thing seems to work as is. For instance,
I roughly simulated the above scenario by running a GPU hog at three
nice levels in parallel with GfxBench TRex (as a game proxy). This is
what I got:

 GPU hog nice | TRex fps
 -------------+---------
            0 |     34.8
           10 |     38.0
          -10 |     30.8

So it is visible that the feature can improve the user experience. The
question is just whether people are happy with this method of
implementing it.
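For reference, here is a rough sketch of the kind of notifier wiring
the first patch adds. The names below are illustrative only and do not
necessarily match the actual patch; the point is simply a global chain
fired from set_user_nice() so that interested drivers can react to
nice value changes of tasks they track:

#include <linux/notifier.h>
#include <linux/sched.h>

/* Global chain; drivers register to hear about nice value changes. */
static ATOMIC_NOTIFIER_HEAD(user_nice_notifier);

int user_nice_register_notifier(struct notifier_block *nb)
{
	return atomic_notifier_chain_register(&user_nice_notifier, nb);
}

int user_nice_unregister_notifier(struct notifier_block *nb)
{
	return atomic_notifier_chain_unregister(&user_nice_notifier, nb);
}

/* Fired from set_user_nice(); passes the new nice value and the task. */
static void user_nice_notify(struct task_struct *p, long nice)
{
	atomic_notifier_call_chain(&user_nice_notifier, nice, p);
}

The chain being global and called from the scheduler core is precisely
one of the open questions, since every nice change then pays the cost
of the notifier call.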
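And a sketch of the i915 side, with names again only loosely following
the files in the diffstat below rather than the patches verbatim:
clients are kept in a hash table keyed on their task_struct pointer, so
the notifier callback can cheaply find the GPU contexts belonging to
the task whose nice value changed:

#include <linux/hashtable.h>
#include <linux/notifier.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

struct i915_drm_client {
	struct task_struct *task;	/* owning task */
	struct hlist_node node;		/* hash table linkage */
	/* ... list of the client's GPU contexts, etc ... */
};

static DEFINE_SPINLOCK(clients_lock);
static DECLARE_HASHTABLE(clients, 6);	/* 64 buckets */

static void client_register(struct i915_drm_client *client)
{
	spin_lock(&clients_lock);
	hash_add(clients, &client->node, (unsigned long)client->task);
	spin_unlock(&clients_lock);
}

/* Notifier callback: find the client(s) for @data and re-prioritise. */
static int i915_nice_changed(struct notifier_block *nb, unsigned long nice,
			     void *data)
{
	struct task_struct *task = data;
	struct i915_drm_client *client;

	spin_lock(&clients_lock);
	hash_for_each_possible(clients, client, node, (unsigned long)task) {
		if (client->task == task) {
			/* Walk the client's contexts and map the new
			 * nice value onto the GPU scheduling priority. */
		}
	}
	spin_unlock(&clients_lock);

	return NOTIFY_DONE;
}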
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot

Tvrtko Ursulin (6):
  sched: Add nice value change notifier
  drm/i915: Explicitly track DRM clients
  drm/i915: Make GEM contexts track DRM clients
  drm/i915: Track all user contexts per client
  drm/i915: Keep track of registered clients indexed by task struct
  drm/i915: Connect task and GPU scheduling priorities

 drivers/gpu/drm/i915/Makefile                 |   5 +-
 drivers/gpu/drm/i915/gem/i915_gem_context.c   |  20 +++
 .../gpu/drm/i915/gem/i915_gem_context_types.h |   6 +
 .../drm/i915/gt/intel_execlists_submission.c  |   2 +-
 drivers/gpu/drm/i915/i915_drm_client.c        | 129 ++++++++++++++++++
 drivers/gpu/drm/i915/i915_drm_client.h        |  71 ++++++++++
 drivers/gpu/drm/i915/i915_drv.c               |   6 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_gem.c               |  21 ++-
 drivers/gpu/drm/i915/i915_request.c           |   2 +-
 drivers/gpu/drm/i915/i915_request.h           |   5 +
 drivers/gpu/drm/i915/i915_scheduler.c         |   3 +-
 drivers/gpu/drm/i915/i915_scheduler.h         |  14 ++
 drivers/gpu/drm/i915/i915_scheduler_types.h   |   8 ++
 include/linux/sched.h                         |   5 +
 kernel/sched/core.c                           |  37 ++++-
 16 files changed, 330 insertions(+), 9 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.c
 create mode 100644 drivers/gpu/drm/i915/i915_drm_client.h

-- 
2.30.2