From: "Rafael J. Wysocki"
To: Linux PM
Cc: LKML, Viresh Kumar, Srinivas Pandruvada, Peter Zijlstra,
    Doug Smythies, Giovanni Gherdovich
Subject: [PATCH v1 3/4] cpufreq: Add special-purpose fast-switching callback for drivers
Date: Mon, 07 Dec 2020 17:35:52 +0100
Message-ID: <146138074.tjdImvNTH2@kreacher>
In-Reply-To: <20360841.iInq7taT2Z@kreacher>
References: <20360841.iInq7taT2Z@kreacher>

From: Rafael J. Wysocki

First off, some cpufreq drivers (e.g. intel_pstate) can pass hints beyond
the current target frequency to the hardware and there are no provisions
for doing that in the cpufreq framework.
In particular, today the driver has to
assume that it should not allow the frequency to fall below the one
requested by the governor (or the required capacity may not be provided)
which may not be the case and which may lead to excessive energy usage in
some scenarios.

Second, the hints passed by these drivers to the hardware need not be in
terms of the frequency, so representing the utilization numbers coming
from the scheduler as frequency before passing them to those drivers is
not really useful.

Address the two points above by adding a special-purpose replacement for
the ->fast_switch callback, called ->adjust_perf, allowing the governor
to pass abstract performance level (rather than frequency) values for the
minimum (required) and target (desired) performance along with the CPU
capacity to compare them to.

Also update the schedutil governor to use the new callback instead of
->fast_switch if present.

Signed-off-by: Rafael J. Wysocki
---

Changes with respect to the RFC:
 - Don't pass "busy" to ->adjust_perf().
 - Use a special 'update_util' hook for the ->adjust_perf() case in
   schedutil (this still requires an additional branch because of the
   shared common code between this case and the "frequency" one, but IMV
   this version is cleaner nevertheless).

---
 drivers/cpufreq/cpufreq.c        |   40 ++++++++++++++++++++++++++++++++
 include/linux/cpufreq.h          |   14 +++++++++++
 include/linux/sched/cpufreq.h    |    5 ++++
 kernel/sched/cpufreq_schedutil.c |   48 +++++++++++++++++++++++++++--------
 4 files changed, 98 insertions(+), 9 deletions(-)
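Purely as an illustration for reviewers (this is not part of the patch and
the example_* identifiers are made up): a driver providing the new callback
could look roughly like the sketch below.  The only real interface used is
the ->adjust_perf member added by this patch; the scaling just follows the
rule that min_perf and target_perf are expressed in units of the given
capacity.

#include <linux/cpufreq.h>

/* Hypothetical hardware accessors, stand-ins for real MSR/firmware calls. */
static unsigned long example_max_perf_level(unsigned int cpu)
{
        return 255;     /* e.g. an HWP-like 0..255 performance-level scale */
}

static void example_set_perf_levels(unsigned int cpu, unsigned long min_level,
                                    unsigned long desired_level)
{
        /* Program the minimum and desired levels into the hardware here. */
}

static void example_adjust_perf(unsigned int cpu, unsigned long min_perf,
                                unsigned long target_perf,
                                unsigned long capacity)
{
        unsigned long max_level = example_max_perf_level(cpu);

        /*
         * min_perf and target_perf are in units of capacity, so rescale them
         * to the hardware's own range: ask for at least min_perf and
         * preferably something close to (but not above) target_perf.
         */
        example_set_perf_levels(cpu, min_perf * max_level / capacity,
                                target_perf * max_level / capacity);
}

static struct cpufreq_driver example_driver = {
        .name           = "example",
        .adjust_perf    = example_adjust_perf,
        /* ->init(), ->fast_switch() and friends omitted for brevity. */
};

Note that schedutil only takes the ->adjust_perf() path when
policy->fast_switch_enabled is set, so a real driver would still provide
->fast_switch() as well.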
Index: linux-pm/include/linux/cpufreq.h
===================================================================
--- linux-pm.orig/include/linux/cpufreq.h
+++ linux-pm/include/linux/cpufreq.h
@@ -320,6 +320,15 @@ struct cpufreq_driver {
                                         unsigned int index);
         unsigned int    (*fast_switch)(struct cpufreq_policy *policy,
                                        unsigned int target_freq);
+       /*
+        * ->fast_switch() replacement for drivers that use an internal
+        * representation of performance levels and can pass hints other than
+        * the target performance level to the hardware.
+        */
+       void            (*adjust_perf)(unsigned int cpu,
+                                      unsigned long min_perf,
+                                      unsigned long target_perf,
+                                      unsigned long capacity);
 
         /*
          * Caches and returns the lowest driver-supported frequency greater than
@@ -588,6 +597,11 @@ struct cpufreq_governor {
 /* Pass a target to the cpufreq driver */
 unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
                                         unsigned int target_freq);
+void cpufreq_driver_adjust_perf(unsigned int cpu,
+                               unsigned long min_perf,
+                               unsigned long target_perf,
+                               unsigned long capacity);
+bool cpufreq_driver_has_adjust_perf(void);
 int cpufreq_driver_target(struct cpufreq_policy *policy,
                           unsigned int target_freq,
                           unsigned int relation);
Index: linux-pm/drivers/cpufreq/cpufreq.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq.c
+++ linux-pm/drivers/cpufreq/cpufreq.c
@@ -2097,6 +2097,46 @@ unsigned int cpufreq_driver_fast_switch(
 }
 EXPORT_SYMBOL_GPL(cpufreq_driver_fast_switch);
 
+/**
+ * cpufreq_driver_adjust_perf - Adjust CPU performance level in one go.
+ * @cpu: Target CPU.
+ * @min_perf: Minimum (required) performance level (units of @capacity).
+ * @target_perf: Target (desired) performance level (units of @capacity).
+ * @capacity: Capacity of the target CPU.
+ *
+ * Carry out a fast performance level switch of @cpu without sleeping.
+ *
+ * The driver's ->adjust_perf() callback invoked by this function must be
+ * suitable for being called from within RCU-sched read-side critical sections
+ * and it is expected to select a suitable performance level equal to or above
+ * @min_perf and preferably equal to or below @target_perf.
+ *
+ * This function must not be called if policy->fast_switch_enabled is unset.
+ *
+ * Governors calling this function must guarantee that it will never be invoked
+ * twice in parallel for the same CPU and that it will never be called in
+ * parallel with either ->target() or ->target_index() or ->fast_switch() for
+ * the same CPU.
+ */
+void cpufreq_driver_adjust_perf(unsigned int cpu,
+                               unsigned long min_perf,
+                               unsigned long target_perf,
+                               unsigned long capacity)
+{
+       cpufreq_driver->adjust_perf(cpu, min_perf, target_perf, capacity);
+}
+
+/**
+ * cpufreq_driver_has_adjust_perf - Check "direct fast switch" callback.
+ *
+ * Return 'true' if the ->adjust_perf callback is present for the
+ * current driver or 'false' otherwise.
+ */
+bool cpufreq_driver_has_adjust_perf(void)
+{
+       return !!cpufreq_driver->adjust_perf;
+}
+
 /* Must set freqs->new to intermediate frequency */
 static int __target_intermediate(struct cpufreq_policy *policy,
                                  struct cpufreq_freqs *freqs, int index)
Index: linux-pm/kernel/sched/cpufreq_schedutil.c
===================================================================
--- linux-pm.orig/kernel/sched/cpufreq_schedutil.c
+++ linux-pm/kernel/sched/cpufreq_schedutil.c
@@ -432,13 +432,11 @@ static inline void ignore_dl_rate_limit(
                 sg_policy->limits_changed = true;
 }
 
-static void sugov_update_single(struct update_util_data *hook, u64 time,
-                               unsigned int flags)
+static bool sugov_update_single_common(struct sugov_cpu *sg_cpu, u64 time,
+                                      unsigned int flags)
 {
-       struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
         struct sugov_policy *sg_policy = sg_cpu->sg_policy;
         unsigned long prev_util = sg_cpu->util;
-       unsigned int next_f;
 
         sugov_iowait_boost(sg_cpu, time, flags);
         sg_cpu->last_update = time;
@@ -446,7 +444,7 @@ static void sugov_update_single(struct u
         ignore_dl_rate_limit(sg_cpu, sg_policy);
 
         if (!sugov_should_update_freq(sg_policy, time))
-               return;
+               return false;
 
         sugov_get_util(sg_cpu);
         sugov_iowait_apply(sg_cpu, time);
@@ -458,6 +456,19 @@ static void sugov_update_single(struct u
         if (sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
                 sg_cpu->util = prev_util;
 
+       return true;
+}
+
+static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
+                                    unsigned int flags)
+{
+       struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
+       struct sugov_policy *sg_policy = sg_cpu->sg_policy;
+       unsigned int next_f;
+
+       if (!sugov_update_single_common(sg_cpu, time, flags))
+               return;
+
         next_f = get_next_freq(sg_policy, sg_cpu->util, sg_cpu->max);
 
         /*
@@ -474,6 +485,20 @@ static void sugov_update_single(struct u
         }
 }
 
+static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
+                                    unsigned int flags)
+{
+       struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
+
+       if (!sugov_update_single_common(sg_cpu, time, flags))
+               return;
+
+       cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
+                                  map_util_perf(sg_cpu->util), sg_cpu->max);
+
+       sg_cpu->sg_policy->last_freq_update_time = time;
+}
+
 static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 {
         struct sugov_policy *sg_policy = sg_cpu->sg_policy;
@@ -812,6 +837,7 @@ static void sugov_exit(struct cpufreq_po
 static int sugov_start(struct cpufreq_policy *policy)
 {
         struct sugov_policy *sg_policy = policy->governor_data;
+       void (*uu)(struct update_util_data *data, u64 time, unsigned int flags);
         unsigned int cpu;
 
         sg_policy->freq_update_delay_ns = sg_policy->tunables->rate_limit_us * NSEC_PER_USEC;
@@ -831,13 +857,17 @@ static int sugov_start(struct cpufreq_po
                 sg_cpu->sg_policy = sg_policy;
         }
 
+       if (policy_is_shared(policy))
+               uu = sugov_update_shared;
+       else if (policy->fast_switch_enabled && cpufreq_driver_has_adjust_perf())
+               uu = sugov_update_single_perf;
+       else
+               uu = sugov_update_single_freq;
+
         for_each_cpu(cpu, policy->cpus) {
                 struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);
 
-               cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util,
-                                            policy_is_shared(policy) ?
-                                                       sugov_update_shared :
-                                                       sugov_update_single);
+               cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util, uu);
         }
         return 0;
 }
Index: linux-pm/include/linux/sched/cpufreq.h
===================================================================
--- linux-pm.orig/include/linux/sched/cpufreq.h
+++ linux-pm/include/linux/sched/cpufreq.h
@@ -28,6 +28,11 @@ static inline unsigned long map_util_fre
 {
         return (freq + (freq >> 2)) * util / cap;
 }
+
+static inline unsigned long map_util_perf(unsigned long util)
+{
+       return util + (util >> 2);
+}
 #endif /* CONFIG_CPU_FREQ */
 
 #endif /* _LINUX_SCHED_CPUFREQ_H */
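As a rough numerical illustration (again, not part of the patch):
map_util_perf() applies the same 25% headroom as map_util_freq(), but keeps
the result in capacity units instead of converting it to a frequency.  So
with, say, sg_cpu->util = 512, sg_cpu->bw_dl = 100 and sg_cpu->max = 1024,
sugov_update_single_perf() ends up doing the equivalent of:

        /* min_perf = 100 + 100/4 = 125, target_perf = 512 + 512/4 = 640 */
        cpufreq_driver_adjust_perf(sg_cpu->cpu, 125, 640, 1024);

and it is then up to the driver to pick a performance level at or above
125/1024 of the CPU's capacity, preferably not above 640/1024 of it.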