From: "Rafael J. Wysocki"
To: Viresh Kumar, "Joel Fernandes (Google.)"
Cc: linux-kernel@vger.kernel.org, "Joel Fernandes (Google)",
    "Rafael J. Wysocki", Peter Zijlstra, Ingo Molnar, Patrick Bellasi,
    Juri Lelli, Luca Abeni, Todd Kjos, claudio@evidence.eu.com,
    kernel-team@android.com, linux-pm@vger.kernel.org
Subject: Re: [PATCH v2] schedutil: Allow cpufreq requests to be made even when kthread kicked
Date: Tue, 22 May 2018 13:31:45 +0200
Message-ID: <6163159.YL3rkaEOau@aspire.rjw.lan>
In-Reply-To: <20180522105429.yn2omuz63oa2w3ez@vireshk-i7>
References: <20180518185501.173552-1-joel@joelfernandes.org>
            <4393838.cQWhzXzUcj@aspire.rjw.lan>
            <20180522105429.yn2omuz63oa2w3ez@vireshk-i7>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tuesday, May 22, 2018 12:54:29 PM CEST Viresh Kumar wrote:
> On 22-05-18, 12:50, Rafael J. Wysocki wrote:
> > Ugly indeed.
> 
> Hehe. I was thinking, maybe we can write wrapper helpers around lock/unlock
> which are stored as pointers in sg_policy, so that those are only set to
> non-NULL values (or non-noop routines) for slow-switching single-policy or
> any-switching shared-policy systems. Then we can get rid of such conditional
> locking attempts :)

So below is my (compiled-only) version of the $subject patch, obviously based
on Joel's work.

Roughly, what it does is to move the fast_switch_enabled path entirely to
sugov_update_single() and take the spinlock around sugov_update_commit() in
the one-CPU case too.
---
 kernel/sched/cpufreq_schedutil.c |   57 ++++++++++++++++++++++++++-------------
 1 file changed, 38 insertions(+), 19 deletions(-)

Index: linux-pm/kernel/sched/cpufreq_schedutil.c
===================================================================
--- linux-pm.orig/kernel/sched/cpufreq_schedutil.c
+++ linux-pm/kernel/sched/cpufreq_schedutil.c
@@ -92,9 +92,6 @@ static bool sugov_should_update_freq(str
 	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
 		return false;
 
-	if (sg_policy->work_in_progress)
-		return false;
-
 	if (unlikely(sg_policy->need_freq_update))
 		return true;
 
@@ -103,25 +100,25 @@ static bool sugov_should_update_freq(str
 	return delta_ns >= sg_policy->freq_update_delay_ns;
 }
 
-static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
-				unsigned int next_freq)
+static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
+				   unsigned int next_freq)
 {
-	struct cpufreq_policy *policy = sg_policy->policy;
-
 	if (sg_policy->next_freq == next_freq)
-		return;
+		return false;
 
 	sg_policy->next_freq = next_freq;
 	sg_policy->last_freq_update_time = time;
 
-	if (policy->fast_switch_enabled) {
-		next_freq = cpufreq_driver_fast_switch(policy, next_freq);
-		if (!next_freq)
-			return;
+	return true;
+}
 
-		policy->cur = next_freq;
-		trace_cpu_frequency(next_freq, smp_processor_id());
-	} else {
+static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
+				unsigned int next_freq)
+{
+	if (!sugov_update_next_freq(sg_policy, time, next_freq))
+		return;
+
+	if (!sg_policy->work_in_progress) {
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
 	}
@@ -277,6 +274,7 @@ static void sugov_update_single(struct u
 {
 	struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
 	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
+	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned long util, max;
 	unsigned int next_f;
 	bool busy;
@@ -307,7 +305,23 @@ static void sugov_update_single(struct u
 		sg_policy->cached_raw_freq = 0;
 	}
 
-	sugov_update_commit(sg_policy, time, next_f);
+	if (policy->fast_switch_enabled) {
+		if (!sugov_update_next_freq(sg_policy, time, next_f))
+			return;
+
+		next_f = cpufreq_driver_fast_switch(policy, next_f);
+		if (!next_f)
+			return;
+
+		policy->cur = next_f;
+		trace_cpu_frequency(next_f, smp_processor_id());
+	} else {
+		raw_spin_lock(&sg_policy->update_lock);
+
+		sugov_update_commit(sg_policy, time, next_f);
+
+		raw_spin_unlock(&sg_policy->update_lock);
+	}
 }
 
 static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
@@ -376,13 +390,18 @@ sugov_update_shared(struct update_util_d
 static void sugov_work(struct kthread_work *work)
 {
 	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
+	unsigned int next_freq;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
+	next_freq = sg_policy->next_freq;
+	sg_policy->work_in_progress = false;
+	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
 
 	mutex_lock(&sg_policy->work_lock);
-	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
+	__cpufreq_driver_target(sg_policy->policy, next_freq,
 				CPUFREQ_RELATION_L);
 	mutex_unlock(&sg_policy->work_lock);
-
-	sg_policy->work_in_progress = false;
 }
 
 static void sugov_irq_work(struct irq_work *irq_work)